00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 82
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3260
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.041 The recommended git tool is: git
00:00:00.041 using credential 00000000-0000-0000-0000-000000000002
00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.061 Fetching changes from the remote Git repository
00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.095 Using shallow fetch with depth 1
00:00:00.095 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.095 > git --version # timeout=10
00:00:00.138 > git --version # 'git version 2.39.2'
00:00:00.138 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.182 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.182 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.634 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.644 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.653 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:03.653 > git config core.sparsecheckout # timeout=10
00:00:03.665 > git read-tree -mu HEAD # timeout=10
00:00:03.678 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:03.697 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:03.697 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:03.793 [Pipeline] Start of Pipeline
00:00:03.806 [Pipeline] library
00:00:03.807 Loading library shm_lib@master
00:00:03.807 Library shm_lib@master is cached. Copying from home.
00:00:03.821 [Pipeline] node
00:00:03.827 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.829 [Pipeline] {
00:00:03.837 [Pipeline] catchError
00:00:03.838 [Pipeline] {
00:00:03.847 [Pipeline] wrap
00:00:03.854 [Pipeline] {
00:00:03.861 [Pipeline] stage
00:00:03.862 [Pipeline] { (Prologue)
00:00:04.036 [Pipeline] sh
00:00:04.320 + logger -p user.info -t JENKINS-CI
00:00:04.337 [Pipeline] echo
00:00:04.338 Node: GP2
00:00:04.344 [Pipeline] sh
00:00:04.640 [Pipeline] setCustomBuildProperty
00:00:04.652 [Pipeline] echo
00:00:04.654 Cleanup processes
00:00:04.658 [Pipeline] sh
00:00:04.939 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.939 754199 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.952 [Pipeline] sh
00:00:05.237 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.237 ++ grep -v 'sudo pgrep'
00:00:05.237 ++ awk '{print $1}'
00:00:05.237 + sudo kill -9
00:00:05.237 + true
00:00:05.250 [Pipeline] cleanWs
00:00:05.257 [WS-CLEANUP] Deleting project workspace...
00:00:05.257 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.263 [WS-CLEANUP] done
00:00:05.268 [Pipeline] setCustomBuildProperty
00:00:05.283 [Pipeline] sh
00:00:05.569 + sudo git config --global --replace-all safe.directory '*'
00:00:05.650 [Pipeline] httpRequest
00:00:05.679 [Pipeline] echo
00:00:05.680 Sorcerer 10.211.164.101 is alive
00:00:05.686 [Pipeline] httpRequest
00:00:05.690 HttpMethod: GET
00:00:05.690 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:05.691 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:05.694 Response Code: HTTP/1.1 200 OK
00:00:05.694 Success: Status code 200 is in the accepted range: 200,404
00:00:05.695 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:06.818 [Pipeline] sh
00:00:07.103 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:07.118 [Pipeline] httpRequest
00:00:07.154 [Pipeline] echo
00:00:07.156 Sorcerer 10.211.164.101 is alive
00:00:07.162 [Pipeline] httpRequest
00:00:07.167 HttpMethod: GET
00:00:07.167 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:07.168 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:00:07.194 Response Code: HTTP/1.1 200 OK
00:00:07.195 Success: Status code 200 is in the accepted range: 200,404
00:00:07.195 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:01:34.287 [Pipeline] sh
00:01:34.575 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz
00:01:37.886 [Pipeline] sh
00:01:38.189 + git -C spdk log --oneline -n5
00:01:38.189 5fa2f5086 nvme: add lock_depth for ctrlr_lock
00:01:38.189 330a4f94d nvme: check pthread_mutex_destroy() return value
00:01:38.189 7b72c3ced nvme: add nvme_ctrlr_lock
00:01:38.189 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock
00:01:38.189 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout
00:01:38.205 [Pipeline] withCredentials
00:01:38.214 > git --version # timeout=10
00:01:38.224 > git --version # 'git version 2.39.2'
00:01:38.243 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:38.244 [Pipeline] {
00:01:38.252 [Pipeline] retry
00:01:38.253 [Pipeline] {
00:01:38.269 [Pipeline] sh
00:01:38.753 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:39.335 [Pipeline] }
00:01:39.356 [Pipeline] // retry
00:01:39.361 [Pipeline] }
00:01:39.376 [Pipeline] // withCredentials
00:01:39.385 [Pipeline] httpRequest
00:01:39.411 [Pipeline] echo
00:01:39.414 Sorcerer 10.211.164.101 is alive
00:01:39.423 [Pipeline] httpRequest
00:01:39.427 HttpMethod: GET
00:01:39.428 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:39.428 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:39.434 Response Code: HTTP/1.1 200 OK
00:01:39.435 Success: Status code 200 is in the accepted range: 200,404
00:01:39.435 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:25.104 [Pipeline] sh
00:02:25.390 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:27.307 [Pipeline] sh
00:02:27.592 + git -C dpdk log --oneline -n5
00:02:27.592 caf0f5d395 version: 22.11.4
00:02:27.592 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:27.592 dc9c799c7d vhost: fix missing spinlock unlock
00:02:27.592 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:27.592 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:27.603 [Pipeline] }
00:02:27.622 [Pipeline] // stage
00:02:27.633 [Pipeline] stage
00:02:27.635 [Pipeline] { (Prepare)
00:02:27.659 [Pipeline] writeFile
00:02:27.677 [Pipeline] sh
00:02:27.964 + logger -p user.info -t JENKINS-CI
00:02:27.978 [Pipeline] sh
00:02:28.265 + logger -p user.info -t JENKINS-CI
00:02:28.282 [Pipeline] sh
00:02:28.569 + cat autorun-spdk.conf
00:02:28.569 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.569 SPDK_TEST_NVMF=1
00:02:28.569 SPDK_TEST_NVME_CLI=1
00:02:28.569 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:28.569 SPDK_TEST_NVMF_NICS=e810
00:02:28.569 SPDK_TEST_VFIOUSER=1
00:02:28.569 SPDK_RUN_UBSAN=1
00:02:28.569 NET_TYPE=phy
00:02:28.569 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:28.569 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:28.577 RUN_NIGHTLY=1
00:02:28.584 [Pipeline] readFile
00:02:28.614 [Pipeline] withEnv
00:02:28.616 [Pipeline] {
00:02:28.631 [Pipeline] sh
00:02:28.919 + set -ex
00:02:28.919 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:28.919 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:28.919 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.919 ++ SPDK_TEST_NVMF=1
00:02:28.919 ++ SPDK_TEST_NVME_CLI=1
00:02:28.919 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:28.919 ++ SPDK_TEST_NVMF_NICS=e810
00:02:28.919 ++ SPDK_TEST_VFIOUSER=1
00:02:28.919 ++ SPDK_RUN_UBSAN=1
00:02:28.919 ++ NET_TYPE=phy
00:02:28.919 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:28.919 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:28.919 ++ RUN_NIGHTLY=1
00:02:28.919 + case $SPDK_TEST_NVMF_NICS in
00:02:28.919 + DRIVERS=ice
00:02:28.919 + [[ tcp == \r\d\m\a ]]
00:02:28.919 + [[ -n ice ]]
00:02:28.919 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:28.919 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:35.494 rmmod: ERROR: Module irdma is not currently loaded
00:02:35.494 rmmod: ERROR: Module i40iw is not currently loaded
00:02:35.494 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:35.494 + true
00:02:35.494 + for D in $DRIVERS
00:02:35.494 + sudo modprobe ice
00:02:35.494 + exit 0
00:02:35.502 [Pipeline] }
00:02:35.518 [Pipeline] // withEnv
00:02:35.523 [Pipeline] }
00:02:35.538 [Pipeline] // stage
00:02:35.548 [Pipeline] catchError
00:02:35.549 [Pipeline] {
00:02:35.564 [Pipeline] timeout
00:02:35.564 Timeout set to expire in 50 min
00:02:35.566 [Pipeline] {
00:02:35.581 [Pipeline] stage
00:02:35.583 [Pipeline] { (Tests)
00:02:35.599 [Pipeline] sh
00:02:35.884 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:35.884 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:35.884 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:35.884 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:35.884 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:35.884 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:35.884 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:35.884 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:35.884 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:35.884 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:35.884 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:35.884 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:35.884 + source /etc/os-release
00:02:35.884 ++ NAME='Fedora Linux'
00:02:35.884 ++ VERSION='38 (Cloud Edition)'
00:02:35.884 ++ ID=fedora
00:02:35.884 ++ VERSION_ID=38
00:02:35.884 ++ VERSION_CODENAME=
00:02:35.884 ++ PLATFORM_ID=platform:f38
00:02:35.884 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:35.884 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:35.884 ++ LOGO=fedora-logo-icon
00:02:35.884 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:35.884 ++ HOME_URL=https://fedoraproject.org/
00:02:35.884 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:35.884 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:35.884 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:35.884 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:35.884 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:35.884 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:35.884 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:35.884 ++ SUPPORT_END=2024-05-14
00:02:35.884 ++ VARIANT='Cloud Edition'
00:02:35.884 ++ VARIANT_ID=cloud
00:02:35.884 + uname -a
00:02:35.884 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:35.884 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:36.822 Hugepages
00:02:36.822 node hugesize free / total
00:02:36.822 node0 1048576kB 0 / 0
00:02:36.822 node0 2048kB 0 / 0
00:02:36.822 node1 1048576kB 0 / 0
00:02:36.822 node1 2048kB 0 / 0
00:02:36.822 
00:02:36.822 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:36.822 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:02:36.822 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:02:36.822 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:02:36.822 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:36.822 + rm -f /tmp/spdk-ld-path
00:02:36.822 + source autorun-spdk.conf
00:02:36.822 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:36.822 ++ SPDK_TEST_NVMF=1
00:02:36.822 ++ SPDK_TEST_NVME_CLI=1
00:02:36.822 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:36.822 ++ SPDK_TEST_NVMF_NICS=e810
00:02:36.822 ++ SPDK_TEST_VFIOUSER=1
00:02:36.822 ++ SPDK_RUN_UBSAN=1
00:02:36.822 ++ NET_TYPE=phy
00:02:36.822 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:36.822 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:36.822 ++ RUN_NIGHTLY=1
00:02:36.822 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:36.822 + [[ -n '' ]]
00:02:36.822 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:36.822 + for M in /var/spdk/build-*-manifest.txt
00:02:36.822 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:36.822 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:36.822 + for M in /var/spdk/build-*-manifest.txt
00:02:36.822 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:36.822 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:36.822 ++ uname
00:02:36.822 + [[ Linux == \L\i\n\u\x ]]
00:02:36.822 + sudo dmesg -T
00:02:37.081 + sudo dmesg --clear
00:02:37.081 + dmesg_pid=754920
00:02:37.081 + [[ Fedora Linux == FreeBSD ]]
00:02:37.081 + sudo dmesg -Tw
00:02:37.081 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.081 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:37.081 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:37.081 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:37.081 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:37.081 + [[ -x /usr/src/fio-static/fio ]]
00:02:37.081 + export FIO_BIN=/usr/src/fio-static/fio
00:02:37.081 + FIO_BIN=/usr/src/fio-static/fio
00:02:37.081 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:37.081 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:37.081 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:37.081 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.081 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:37.081 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:37.081 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.081 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:37.081 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:37.081 Test configuration:
00:02:37.081 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:37.081 SPDK_TEST_NVMF=1
00:02:37.081 SPDK_TEST_NVME_CLI=1
00:02:37.081 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:37.081 SPDK_TEST_NVMF_NICS=e810
00:02:37.081 SPDK_TEST_VFIOUSER=1
00:02:37.081 SPDK_RUN_UBSAN=1
00:02:37.081 NET_TYPE=phy
00:02:37.081 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:37.081 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:37.081 RUN_NIGHTLY=1
00:15:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:37.081 00:15:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:37.081 00:15:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:37.081 00:15:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:37.081 00:15:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:37.081 00:15:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:37.081 00:15:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:37.081 00:15:04 -- paths/export.sh@5 -- $ export PATH
00:02:37.081 00:15:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:37.081 00:15:04 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:37.081 00:15:04 -- common/autobuild_common.sh@437 -- $ date +%s
00:02:37.081 00:15:04 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720736104.XXXXXX
00:02:37.081 00:15:04 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720736104.GJrr46
00:02:37.081 00:15:04 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:02:37.081 00:15:04 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:02:37.081 00:15:04 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:37.081 00:15:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:02:37.081 00:15:04 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:37.081 00:15:04 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:37.081 00:15:04 -- common/autobuild_common.sh@453 -- $ get_config_params
00:02:37.081 00:15:04 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:02:37.081 00:15:04 -- common/autotest_common.sh@10 -- $ set +x
00:02:37.081 00:15:04 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:02:37.081 00:15:04 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:02:37.081 00:15:04 -- pm/common@17 -- $ local monitor
00:02:37.081 00:15:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:37.081 00:15:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:37.081 00:15:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:37.081 00:15:04 -- pm/common@21 -- $ date +%s
00:02:37.081 00:15:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:37.081 00:15:04 -- pm/common@21 -- $ date +%s
00:02:37.081 00:15:04 -- pm/common@25 -- $ sleep 1
00:02:37.081 00:15:04 -- pm/common@21 -- $ date +%s
00:02:37.081 00:15:04 -- pm/common@21 -- $ date +%s
00:02:37.081 00:15:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720736104
00:15:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720736104
00:15:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720736104
00:15:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720736104
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720736104_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720736104_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720736104_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720736104_collect-bmc-pm.bmc.pm.log
00:02:38.020 00:15:05 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:02:38.020 00:15:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:38.020 00:15:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:38.020 00:15:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:38.020 00:15:05 -- spdk/autobuild.sh@16 -- $ date -u
00:02:38.020 Thu Jul 11 10:15:05 PM UTC 2024
00:02:38.020 00:15:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:38.020 v24.05-13-g5fa2f5086
00:02:38.020 00:15:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:38.020 00:15:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:38.020 00:15:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:38.020 00:15:05 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:02:38.020 00:15:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:02:38.020 00:15:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.020 ************************************
00:02:38.020 START TEST ubsan
00:02:38.020 ************************************
00:02:38.020 00:15:05 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:02:38.020 using ubsan
00:02:38.020 
00:02:38.020 real 0m0.000s
00:02:38.020 user 0m0.000s
00:02:38.020 sys 0m0.000s
00:02:38.020 00:15:05 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:02:38.020 00:15:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:38.020 ************************************
00:02:38.020 END TEST ubsan
00:02:38.020 ************************************
00:02:38.020 00:15:05 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:38.020 00:15:05 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:38.020 00:15:05 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:38.021 00:15:05 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']'
00:02:38.021 00:15:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:02:38.021 00:15:05 -- common/autotest_common.sh@10 -- $ set +x
00:02:38.281 ************************************
00:02:38.281 START TEST build_native_dpdk
00:02:38.281 ************************************
00:02:38.281 00:15:05 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:02:38.281 caf0f5d395 version: 22.11.4
00:02:38.281 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:38.281 dc9c799c7d vhost: fix missing spinlock unlock
00:02:38.281 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:38.281 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:02:38.281 00:15:05 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
00:02:38.282 00:15:05 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:38.282 patching file config/rte_config.h
00:02:38.282 Hunk #1 succeeded at 60 (offset 1 line).
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:02:38.282 00:15:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:42.480 The Meson build system
00:02:42.480 Version: 1.3.1
00:02:42.480 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:02:42.480 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:02:42.480 Build type: native build
00:02:42.480 Program cat found: YES (/usr/bin/cat)
00:02:42.480 Project name: DPDK
00:02:42.480 Project version: 22.11.4
00:02:42.480 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:42.480 C linker for the host machine: gcc ld.bfd 2.39-16
00:02:42.480 Host machine cpu family: x86_64
00:02:42.480 Host machine cpu: x86_64
00:02:42.480 Message: ## Building in Developer Mode ##
00:02:42.480 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:42.480 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:02:42.480 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:02:42.480 Program objdump found: YES (/usr/bin/objdump)
00:02:42.480 Program python3 found: YES (/usr/bin/python3)
00:02:42.480 Program cat found: YES (/usr/bin/cat)
00:02:42.480 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:42.480 Checking for size of "void *" : 8
00:02:42.480 Checking for size of "void *" : 8 (cached)
00:02:42.480 Library m found: YES
00:02:42.480 Library numa found: YES
00:02:42.480 Has header "numaif.h" : YES
00:02:42.480 Library fdt found: NO
00:02:42.480 Library execinfo found: NO
00:02:42.480 Has header "execinfo.h" : YES
00:02:42.480 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:42.480 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:42.480 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:42.480 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:42.480 Run-time dependency openssl found: YES 3.0.9
00:02:42.480 Run-time dependency libpcap found: YES 1.10.4
00:02:42.480 Has header "pcap.h" with dependency libpcap: YES
00:02:42.480 Compiler for C supports arguments -Wcast-qual: YES
00:02:42.480 Compiler for C supports arguments -Wdeprecated: YES
00:02:42.480 Compiler for C supports arguments -Wformat: YES
00:02:42.480 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:42.480 Compiler for C supports arguments -Wformat-security: NO
00:02:42.480 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:42.480 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:42.480 Compiler for C supports arguments -Wnested-externs: YES
00:02:42.480 Compiler for C supports arguments -Wold-style-definition: YES
00:02:42.480 Compiler for C supports arguments -Wpointer-arith: YES
00:02:42.480 Compiler for C supports arguments -Wsign-compare: YES
00:02:42.480 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:42.480 Compiler for C supports arguments -Wundef: YES
00:02:42.480 Compiler for C supports arguments -Wwrite-strings: YES
00:02:42.480 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:42.480 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:42.480 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:42.480 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:42.480 Compiler for C supports arguments -mavx512f: YES
00:02:42.480 Checking if "AVX512 checking" compiles: YES
00:02:42.480 Fetching value of define "__SSE4_2__" : 1
00:02:42.480 Fetching value of define "__AES__" : 1
00:02:42.480 Fetching value of define "__AVX__" : 1
00:02:42.480 Fetching value of define "__AVX2__" : (undefined)
00:02:42.480 Fetching value of define "__AVX512BW__" : (undefined)
00:02:42.480 Fetching value of define "__AVX512CD__" : (undefined)
00:02:42.480 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:42.480 Fetching value of define "__AVX512F__" : (undefined)
00:02:42.480 Fetching value of define "__AVX512VL__" : (undefined)
00:02:42.480 Fetching value of define "__PCLMUL__" : 1
00:02:42.480 Fetching value of define "__RDRND__" : (undefined)
00:02:42.480 Fetching value of define "__RDSEED__" : (undefined)
00:02:42.480 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:42.480 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:42.480 Message: lib/kvargs: Defining dependency "kvargs"
00:02:42.480 Message: lib/telemetry: Defining dependency "telemetry"
00:02:42.480 Checking for function "getentropy" : YES
00:02:42.480 Message: lib/eal: Defining dependency "eal"
00:02:42.480 Message: lib/ring: Defining dependency "ring"
00:02:42.480 Message: lib/rcu: Defining dependency "rcu"
00:02:42.480 Message: lib/mempool: Defining dependency "mempool"
00:02:42.480 Message: lib/mbuf: Defining dependency "mbuf"
00:02:42.480 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:42.480 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:42.480 Compiler for C supports arguments -mpclmul: YES
00:02:42.480 Compiler for C supports arguments -maes: YES
00:02:42.480 Compiler for C
supports arguments -mavx512f: YES (cached) 00:02:42.480 Compiler for C supports arguments -mavx512bw: YES 00:02:42.480 Compiler for C supports arguments -mavx512dq: YES 00:02:42.480 Compiler for C supports arguments -mavx512vl: YES 00:02:42.480 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.480 Compiler for C supports arguments -mavx2: YES 00:02:42.480 Compiler for C supports arguments -mavx: YES 00:02:42.480 Message: lib/net: Defining dependency "net" 00:02:42.480 Message: lib/meter: Defining dependency "meter" 00:02:42.480 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.480 Message: lib/pci: Defining dependency "pci" 00:02:42.480 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.480 Message: lib/metrics: Defining dependency "metrics" 00:02:42.480 Message: lib/hash: Defining dependency "hash" 00:02:42.480 Message: lib/timer: Defining dependency "timer" 00:02:42.480 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:42.480 Compiler for C supports arguments -mavx2: YES (cached) 00:02:42.480 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.480 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:42.480 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:42.480 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:42.480 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:42.480 Message: lib/acl: Defining dependency "acl" 00:02:42.480 Message: lib/bbdev: Defining dependency "bbdev" 00:02:42.480 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:42.480 Run-time dependency libelf found: YES 0.190 00:02:42.480 Message: lib/bpf: Defining dependency "bpf" 00:02:42.480 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:42.480 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.480 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.480 Message: lib/distributor: 
Defining dependency "distributor" 00:02:42.480 Message: lib/efd: Defining dependency "efd" 00:02:42.480 Message: lib/eventdev: Defining dependency "eventdev" 00:02:42.481 Message: lib/gpudev: Defining dependency "gpudev" 00:02:42.481 Message: lib/gro: Defining dependency "gro" 00:02:42.481 Message: lib/gso: Defining dependency "gso" 00:02:42.481 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:42.481 Message: lib/jobstats: Defining dependency "jobstats" 00:02:42.481 Message: lib/latencystats: Defining dependency "latencystats" 00:02:42.481 Message: lib/lpm: Defining dependency "lpm" 00:02:42.481 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.481 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.481 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:42.481 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:42.481 Message: lib/member: Defining dependency "member" 00:02:42.481 Message: lib/pcapng: Defining dependency "pcapng" 00:02:42.481 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.481 Message: lib/power: Defining dependency "power" 00:02:42.481 Message: lib/rawdev: Defining dependency "rawdev" 00:02:42.481 Message: lib/regexdev: Defining dependency "regexdev" 00:02:42.481 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.481 Message: lib/rib: Defining dependency "rib" 00:02:42.481 Message: lib/reorder: Defining dependency "reorder" 00:02:42.481 Message: lib/sched: Defining dependency "sched" 00:02:42.481 Message: lib/security: Defining dependency "security" 00:02:42.481 Message: lib/stack: Defining dependency "stack" 00:02:42.481 Has header "linux/userfaultfd.h" : YES 00:02:42.481 Message: lib/vhost: Defining dependency "vhost" 00:02:42.481 Message: lib/ipsec: Defining dependency "ipsec" 00:02:42.481 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.481 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 
00:02:42.481 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:42.481 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:42.481 Message: lib/fib: Defining dependency "fib" 00:02:42.481 Message: lib/port: Defining dependency "port" 00:02:42.481 Message: lib/pdump: Defining dependency "pdump" 00:02:42.481 Message: lib/table: Defining dependency "table" 00:02:42.481 Message: lib/pipeline: Defining dependency "pipeline" 00:02:42.481 Message: lib/graph: Defining dependency "graph" 00:02:42.481 Message: lib/node: Defining dependency "node" 00:02:42.481 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:42.481 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.481 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.481 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.481 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:42.481 Compiler for C supports arguments -Wno-unused-value: YES 00:02:43.866 Compiler for C supports arguments -Wno-format: YES 00:02:43.866 Compiler for C supports arguments -Wno-format-security: YES 00:02:43.866 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:43.866 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:43.866 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:43.866 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:43.866 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:43.866 Compiler for C supports arguments -mavx2: YES (cached) 00:02:43.866 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:43.866 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:43.866 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:43.866 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:43.866 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:43.866 Program doxygen found: YES 
(/usr/bin/doxygen) 00:02:43.866 Configuring doxy-api.conf using configuration 00:02:43.866 Program sphinx-build found: NO 00:02:43.866 Configuring rte_build_config.h using configuration 00:02:43.866 Message: 00:02:43.866 ================= 00:02:43.866 Applications Enabled 00:02:43.866 ================= 00:02:43.866 00:02:43.866 apps: 00:02:43.866 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:43.866 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:43.866 test-security-perf, 00:02:43.866 00:02:43.866 Message: 00:02:43.866 ================= 00:02:43.866 Libraries Enabled 00:02:43.866 ================= 00:02:43.866 00:02:43.866 libs: 00:02:43.866 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:43.866 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:43.866 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:43.866 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:43.866 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:43.866 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:43.866 table, pipeline, graph, node, 00:02:43.866 00:02:43.866 Message: 00:02:43.866 =============== 00:02:43.866 Drivers Enabled 00:02:43.866 =============== 00:02:43.866 00:02:43.866 common: 00:02:43.866 00:02:43.866 bus: 00:02:43.866 pci, vdev, 00:02:43.866 mempool: 00:02:43.866 ring, 00:02:43.866 dma: 00:02:43.866 00:02:43.866 net: 00:02:43.866 i40e, 00:02:43.866 raw: 00:02:43.866 00:02:43.866 crypto: 00:02:43.866 00:02:43.867 compress: 00:02:43.867 00:02:43.867 regex: 00:02:43.867 00:02:43.867 vdpa: 00:02:43.867 00:02:43.867 event: 00:02:43.867 00:02:43.867 baseband: 00:02:43.867 00:02:43.867 gpu: 00:02:43.867 00:02:43.867 00:02:43.867 Message: 00:02:43.867 ================= 00:02:43.867 Content Skipped 00:02:43.867 ================= 00:02:43.867 00:02:43.867 apps: 
00:02:43.867 00:02:43.867 libs: 00:02:43.867 kni: explicitly disabled via build config (deprecated lib) 00:02:43.867 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:43.867 00:02:43.867 drivers: 00:02:43.867 common/cpt: not in enabled drivers build config 00:02:43.867 common/dpaax: not in enabled drivers build config 00:02:43.867 common/iavf: not in enabled drivers build config 00:02:43.867 common/idpf: not in enabled drivers build config 00:02:43.867 common/mvep: not in enabled drivers build config 00:02:43.867 common/octeontx: not in enabled drivers build config 00:02:43.867 bus/auxiliary: not in enabled drivers build config 00:02:43.867 bus/dpaa: not in enabled drivers build config 00:02:43.867 bus/fslmc: not in enabled drivers build config 00:02:43.867 bus/ifpga: not in enabled drivers build config 00:02:43.867 bus/vmbus: not in enabled drivers build config 00:02:43.867 common/cnxk: not in enabled drivers build config 00:02:43.867 common/mlx5: not in enabled drivers build config 00:02:43.867 common/qat: not in enabled drivers build config 00:02:43.867 common/sfc_efx: not in enabled drivers build config 00:02:43.867 mempool/bucket: not in enabled drivers build config 00:02:43.867 mempool/cnxk: not in enabled drivers build config 00:02:43.867 mempool/dpaa: not in enabled drivers build config 00:02:43.867 mempool/dpaa2: not in enabled drivers build config 00:02:43.867 mempool/octeontx: not in enabled drivers build config 00:02:43.867 mempool/stack: not in enabled drivers build config 00:02:43.867 dma/cnxk: not in enabled drivers build config 00:02:43.867 dma/dpaa: not in enabled drivers build config 00:02:43.867 dma/dpaa2: not in enabled drivers build config 00:02:43.867 dma/hisilicon: not in enabled drivers build config 00:02:43.867 dma/idxd: not in enabled drivers build config 00:02:43.867 dma/ioat: not in enabled drivers build config 00:02:43.867 dma/skeleton: not in enabled drivers build config 00:02:43.867 net/af_packet: not in 
enabled drivers build config 00:02:43.867 net/af_xdp: not in enabled drivers build config 00:02:43.867 net/ark: not in enabled drivers build config 00:02:43.867 net/atlantic: not in enabled drivers build config 00:02:43.867 net/avp: not in enabled drivers build config 00:02:43.867 net/axgbe: not in enabled drivers build config 00:02:43.867 net/bnx2x: not in enabled drivers build config 00:02:43.867 net/bnxt: not in enabled drivers build config 00:02:43.867 net/bonding: not in enabled drivers build config 00:02:43.867 net/cnxk: not in enabled drivers build config 00:02:43.867 net/cxgbe: not in enabled drivers build config 00:02:43.867 net/dpaa: not in enabled drivers build config 00:02:43.867 net/dpaa2: not in enabled drivers build config 00:02:43.867 net/e1000: not in enabled drivers build config 00:02:43.867 net/ena: not in enabled drivers build config 00:02:43.867 net/enetc: not in enabled drivers build config 00:02:43.867 net/enetfec: not in enabled drivers build config 00:02:43.867 net/enic: not in enabled drivers build config 00:02:43.867 net/failsafe: not in enabled drivers build config 00:02:43.867 net/fm10k: not in enabled drivers build config 00:02:43.867 net/gve: not in enabled drivers build config 00:02:43.867 net/hinic: not in enabled drivers build config 00:02:43.867 net/hns3: not in enabled drivers build config 00:02:43.867 net/iavf: not in enabled drivers build config 00:02:43.867 net/ice: not in enabled drivers build config 00:02:43.867 net/idpf: not in enabled drivers build config 00:02:43.867 net/igc: not in enabled drivers build config 00:02:43.867 net/ionic: not in enabled drivers build config 00:02:43.867 net/ipn3ke: not in enabled drivers build config 00:02:43.867 net/ixgbe: not in enabled drivers build config 00:02:43.867 net/kni: not in enabled drivers build config 00:02:43.867 net/liquidio: not in enabled drivers build config 00:02:43.867 net/mana: not in enabled drivers build config 00:02:43.867 net/memif: not in enabled drivers build 
config 00:02:43.867 net/mlx4: not in enabled drivers build config 00:02:43.867 net/mlx5: not in enabled drivers build config 00:02:43.867 net/mvneta: not in enabled drivers build config 00:02:43.867 net/mvpp2: not in enabled drivers build config 00:02:43.867 net/netvsc: not in enabled drivers build config 00:02:43.867 net/nfb: not in enabled drivers build config 00:02:43.867 net/nfp: not in enabled drivers build config 00:02:43.867 net/ngbe: not in enabled drivers build config 00:02:43.867 net/null: not in enabled drivers build config 00:02:43.867 net/octeontx: not in enabled drivers build config 00:02:43.867 net/octeon_ep: not in enabled drivers build config 00:02:43.867 net/pcap: not in enabled drivers build config 00:02:43.867 net/pfe: not in enabled drivers build config 00:02:43.867 net/qede: not in enabled drivers build config 00:02:43.867 net/ring: not in enabled drivers build config 00:02:43.867 net/sfc: not in enabled drivers build config 00:02:43.867 net/softnic: not in enabled drivers build config 00:02:43.867 net/tap: not in enabled drivers build config 00:02:43.867 net/thunderx: not in enabled drivers build config 00:02:43.867 net/txgbe: not in enabled drivers build config 00:02:43.867 net/vdev_netvsc: not in enabled drivers build config 00:02:43.867 net/vhost: not in enabled drivers build config 00:02:43.867 net/virtio: not in enabled drivers build config 00:02:43.867 net/vmxnet3: not in enabled drivers build config 00:02:43.867 raw/cnxk_bphy: not in enabled drivers build config 00:02:43.867 raw/cnxk_gpio: not in enabled drivers build config 00:02:43.867 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:43.867 raw/ifpga: not in enabled drivers build config 00:02:43.867 raw/ntb: not in enabled drivers build config 00:02:43.867 raw/skeleton: not in enabled drivers build config 00:02:43.867 crypto/armv8: not in enabled drivers build config 00:02:43.867 crypto/bcmfs: not in enabled drivers build config 00:02:43.867 crypto/caam_jr: not in enabled 
drivers build config 00:02:43.867 crypto/ccp: not in enabled drivers build config 00:02:43.867 crypto/cnxk: not in enabled drivers build config 00:02:43.867 crypto/dpaa_sec: not in enabled drivers build config 00:02:43.867 crypto/dpaa2_sec: not in enabled drivers build config 00:02:43.867 crypto/ipsec_mb: not in enabled drivers build config 00:02:43.867 crypto/mlx5: not in enabled drivers build config 00:02:43.867 crypto/mvsam: not in enabled drivers build config 00:02:43.867 crypto/nitrox: not in enabled drivers build config 00:02:43.867 crypto/null: not in enabled drivers build config 00:02:43.867 crypto/octeontx: not in enabled drivers build config 00:02:43.867 crypto/openssl: not in enabled drivers build config 00:02:43.867 crypto/scheduler: not in enabled drivers build config 00:02:43.867 crypto/uadk: not in enabled drivers build config 00:02:43.867 crypto/virtio: not in enabled drivers build config 00:02:43.867 compress/isal: not in enabled drivers build config 00:02:43.867 compress/mlx5: not in enabled drivers build config 00:02:43.867 compress/octeontx: not in enabled drivers build config 00:02:43.867 compress/zlib: not in enabled drivers build config 00:02:43.867 regex/mlx5: not in enabled drivers build config 00:02:43.867 regex/cn9k: not in enabled drivers build config 00:02:43.867 vdpa/ifc: not in enabled drivers build config 00:02:43.867 vdpa/mlx5: not in enabled drivers build config 00:02:43.867 vdpa/sfc: not in enabled drivers build config 00:02:43.867 event/cnxk: not in enabled drivers build config 00:02:43.867 event/dlb2: not in enabled drivers build config 00:02:43.867 event/dpaa: not in enabled drivers build config 00:02:43.867 event/dpaa2: not in enabled drivers build config 00:02:43.867 event/dsw: not in enabled drivers build config 00:02:43.867 event/opdl: not in enabled drivers build config 00:02:43.867 event/skeleton: not in enabled drivers build config 00:02:43.867 event/sw: not in enabled drivers build config 00:02:43.867 event/octeontx: 
not in enabled drivers build config 00:02:43.867 baseband/acc: not in enabled drivers build config 00:02:43.867 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:43.867 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:43.867 baseband/la12xx: not in enabled drivers build config 00:02:43.867 baseband/null: not in enabled drivers build config 00:02:43.867 baseband/turbo_sw: not in enabled drivers build config 00:02:43.867 gpu/cuda: not in enabled drivers build config 00:02:43.867 00:02:43.867 00:02:43.867 Build targets in project: 316 00:02:43.867 00:02:43.867 DPDK 22.11.4 00:02:43.867 00:02:43.867 User defined options 00:02:43.867 libdir : lib 00:02:43.867 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:43.867 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:43.867 c_link_args : 00:02:43.867 enable_docs : false 00:02:43.867 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:43.867 enable_kmods : false 00:02:43.867 machine : native 00:02:43.867 tests : false 00:02:43.867 00:02:43.867 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.867 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:43.867 00:15:11 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 00:02:43.867 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:44.131 [1/745] Generating lib/rte_kvargs_mingw with a custom command 00:02:44.131 [2/745] Generating lib/rte_telemetry_def with a custom command 00:02:44.131 [3/745] Generating lib/rte_kvargs_def with a custom command 00:02:44.131 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:02:44.131 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.131 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.131 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.131 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.131 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.131 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.131 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.131 [12/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.131 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.131 [14/745] Linking static target lib/librte_kvargs.a 00:02:44.131 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.131 [16/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.131 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.131 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.131 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.131 [20/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
00:02:44.132 [21/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.132 [22/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.394 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.394 [24/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.394 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.394 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.394 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.394 [28/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.394 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.394 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.394 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.394 [32/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.394 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.394 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:44.394 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.394 [36/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.394 [37/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.394 [38/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.394 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.394 [40/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.394 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.394 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:44.394 
[43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.656 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.656 [45/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.656 [46/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.656 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.656 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.656 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.656 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.656 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.656 [52/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.656 [53/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.656 [54/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.656 [55/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.656 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.656 [57/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.656 [58/745] Generating lib/rte_eal_def with a custom command 00:02:44.656 [59/745] Generating lib/rte_eal_mingw with a custom command 00:02:44.656 [60/745] Generating lib/rte_ring_def with a custom command 00:02:44.656 [61/745] Generating lib/rte_ring_mingw with a custom command 00:02:44.656 [62/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.656 [63/745] Generating lib/rte_rcu_def with a custom command 00:02:44.656 [64/745] Linking target lib/librte_kvargs.so.23.0 00:02:44.656 [65/745] Generating lib/rte_rcu_mingw with a custom command 00:02:44.656 [66/745] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.656 [67/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.656 [68/745] Generating lib/rte_mempool_mingw with a custom command 00:02:44.656 [69/745] Generating lib/rte_mempool_def with a custom command 00:02:44.656 [70/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.656 [71/745] Generating lib/rte_mbuf_def with a custom command 00:02:44.657 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.657 [73/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.657 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.657 [75/745] Generating lib/rte_mbuf_mingw with a custom command 00:02:44.657 [76/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.918 [77/745] Generating lib/rte_net_def with a custom command 00:02:44.918 [78/745] Generating lib/rte_net_mingw with a custom command 00:02:44.918 [79/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.918 [80/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:44.918 [81/745] Generating lib/rte_meter_def with a custom command 00:02:44.918 [82/745] Generating lib/rte_meter_mingw with a custom command 00:02:44.918 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.918 [84/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.918 [85/745] Linking static target lib/librte_ring.a 00:02:44.918 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.918 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.918 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.179 [89/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.179 [90/745] Linking static target lib/librte_meter.a 
00:02:45.179 [91/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.179 [92/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:45.179 [93/745] Linking static target lib/librte_telemetry.a 00:02:45.440 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.440 [95/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.440 [96/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.440 [97/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.440 [98/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.700 [99/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.700 [100/745] Linking target lib/librte_telemetry.so.23.0 00:02:45.700 [101/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.700 [102/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:45.961 [103/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:45.961 [104/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.961 [105/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.961 [106/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.961 [107/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.961 [108/745] Generating lib/rte_ethdev_def with a custom command 00:02:45.961 [109/745] Generating lib/rte_ethdev_mingw with a custom command 00:02:45.961 [110/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.961 [111/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.961 [112/745] Generating lib/rte_pci_def with a custom command 00:02:45.961 [113/745] Generating symbol file 
lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:45.961 [114/745] Generating lib/rte_pci_mingw with a custom command 00:02:45.961 [115/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.961 [116/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:45.961 [117/745] Linking static target lib/librte_pci.a 00:02:46.220 [118/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.220 [119/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:46.220 [120/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.220 [121/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.220 [122/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:46.220 [123/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.220 [124/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.220 [125/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:46.220 [126/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.220 [127/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.220 [128/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.220 [129/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:46.220 [130/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.220 [131/745] Generating lib/rte_cmdline_mingw with a custom command 00:02:46.220 [132/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.220 [133/745] Generating lib/rte_cmdline_def with a custom command 00:02:46.220 [134/745] Generating lib/rte_metrics_mingw with a custom command 00:02:46.220 [135/745] Generating lib/rte_metrics_def with a custom command 00:02:46.220 [136/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.220 [137/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.220 [138/745] Generating lib/rte_hash_def with a custom command 00:02:46.220 [139/745] Linking static target lib/librte_rcu.a 00:02:46.220 [140/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.220 [141/745] Generating lib/rte_hash_mingw with a custom command 00:02:46.482 [142/745] Generating lib/rte_timer_mingw with a custom command 00:02:46.482 [143/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.482 [144/745] Generating lib/rte_timer_def with a custom command 00:02:46.482 [145/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.482 [146/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:46.482 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.482 [148/745] Linking static target lib/librte_net.a 00:02:46.482 [149/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:46.482 [150/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:46.482 [151/745] Generating lib/rte_acl_def with a custom command 00:02:46.482 [152/745] Generating lib/rte_acl_mingw with a custom command 00:02:46.743 [153/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:46.743 [154/745] Generating lib/rte_bbdev_def with a custom command 00:02:46.743 [155/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.743 [156/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:46.743 [157/745] Linking static target lib/librte_mempool.a 00:02:46.743 [158/745] Generating lib/rte_bbdev_mingw with a custom command 00:02:46.743 [159/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.743 [160/745] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:46.743 [161/745] Generating lib/rte_bitratestats_def with a custom command 00:02:46.743 [162/745] Generating lib/rte_bitratestats_mingw with a custom command 00:02:46.743 [163/745] Linking static target lib/librte_eal.a 00:02:46.743 [164/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.000 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:47.000 [166/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.000 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:47.000 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.000 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:47.000 [170/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.280 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:47.280 [172/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:47.280 [173/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.280 [174/745] Linking static target lib/librte_timer.a 00:02:47.280 [175/745] Linking static target lib/librte_cmdline.a 00:02:47.280 [176/745] Generating lib/rte_bpf_def with a custom command 00:02:47.280 [177/745] Generating lib/rte_bpf_mingw with a custom command 00:02:47.551 [178/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:47.551 [179/745] Linking static target lib/librte_metrics.a 00:02:47.551 [180/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.551 [181/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:47.551 [182/745] Generating lib/rte_cfgfile_def with a custom command 00:02:47.551 [183/745] Generating lib/rte_cfgfile_mingw with a custom command 00:02:47.551 [184/745] Compiling C object 
lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:47.816 [185/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.816 [186/745] Generating lib/rte_compressdev_def with a custom command 00:02:47.816 [187/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:47.816 [188/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.816 [189/745] Linking static target lib/librte_cfgfile.a 00:02:47.816 [190/745] Generating lib/rte_compressdev_mingw with a custom command 00:02:47.816 [191/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:47.816 [192/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:47.816 [193/745] Linking static target lib/librte_bitratestats.a 00:02:48.077 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:48.077 [195/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:48.077 [196/745] Generating lib/rte_cryptodev_def with a custom command 00:02:48.077 [197/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.077 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:48.077 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:48.077 [200/745] Generating lib/rte_cryptodev_mingw with a custom command 00:02:48.077 [201/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:48.077 [202/745] Generating lib/rte_distributor_def with a custom command 00:02:48.077 [203/745] Generating lib/rte_distributor_mingw with a custom command 00:02:48.077 [204/745] Generating lib/rte_efd_def with a custom command 00:02:48.077 [205/745] Generating lib/rte_efd_mingw with a custom command 00:02:48.340 [206/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.340 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:48.340 [208/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:48.340 [209/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:48.340 [210/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:48.600 [211/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.600 [212/745] Linking static target lib/librte_bbdev.a 00:02:48.600 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:48.863 [214/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.863 [215/745] Generating lib/rte_eventdev_def with a custom command 00:02:48.863 [216/745] Generating lib/rte_eventdev_mingw with a custom command 00:02:49.125 [217/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.125 [218/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:49.125 [219/745] Generating lib/rte_gpudev_def with a custom command 00:02:49.125 [220/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.125 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.125 [222/745] Generating lib/rte_gpudev_mingw with a custom command 00:02:49.125 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:49.395 [224/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.395 [225/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:49.395 [226/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.395 [227/745] Generating lib/rte_gro_def with a custom command 00:02:49.395 [228/745] Generating lib/rte_gro_mingw with a custom command 00:02:49.395 [229/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.395 [230/745] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:49.395 [231/745] Linking static target lib/librte_compressdev.a 00:02:49.395 [232/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.652 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:49.652 [234/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:49.652 [235/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:49.652 [236/745] Linking static target lib/librte_bpf.a 00:02:49.652 [237/745] Generating lib/rte_gso_def with a custom command 00:02:49.652 [238/745] Generating lib/rte_gso_mingw with a custom command 00:02:49.913 [239/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:49.913 [240/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:49.913 [241/745] Linking static target lib/librte_distributor.a 00:02:50.176 [242/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.176 [243/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.440 [244/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.440 [245/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:50.700 [246/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:50.700 [247/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:50.700 [248/745] Generating lib/rte_ip_frag_mingw with a custom command 00:02:50.700 [249/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:50.700 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:50.700 [251/745] Generating lib/rte_ip_frag_def with a custom command 00:02:50.700 [252/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:50.700 [253/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 
00:02:50.700 [254/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:50.700 [255/745] Linking static target lib/librte_gpudev.a 00:02:50.700 [256/745] Generating lib/rte_jobstats_def with a custom command 00:02:50.700 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:50.700 [258/745] Generating lib/rte_jobstats_mingw with a custom command 00:02:50.700 [259/745] Generating lib/rte_latencystats_def with a custom command 00:02:50.700 [260/745] Generating lib/rte_latencystats_mingw with a custom command 00:02:50.700 [261/745] Generating lib/rte_lpm_def with a custom command 00:02:50.700 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:50.700 [263/745] Generating lib/rte_lpm_mingw with a custom command 00:02:50.961 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:50.961 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:50.961 [266/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.961 [267/745] Linking static target lib/librte_gro.a 00:02:50.961 [268/745] Generating lib/rte_member_def with a custom command 00:02:50.961 [269/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:50.961 [270/745] Linking static target lib/librte_jobstats.a 00:02:50.961 [271/745] Generating lib/rte_member_mingw with a custom command 00:02:51.223 [272/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.223 [273/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:51.223 [274/745] Generating lib/rte_pcapng_def with a custom command 00:02:51.223 [275/745] Generating lib/rte_pcapng_mingw with a custom command 00:02:51.486 [276/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.486 [277/745] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:51.486 [278/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:51.486 [279/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:51.486 [280/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.486 [281/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.486 [282/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.748 [283/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:51.748 [284/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:51.748 [285/745] Linking static target lib/acl/libavx2_tmp.a 00:02:51.748 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:51.748 [287/745] Generating lib/rte_power_def with a custom command 00:02:51.748 [288/745] Generating lib/rte_power_mingw with a custom command 00:02:51.748 [289/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:51.748 [290/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:51.748 [291/745] Generating lib/rte_rawdev_def with a custom command 00:02:52.011 [292/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:52.011 [293/745] Generating lib/rte_rawdev_mingw with a custom command 00:02:52.011 [294/745] Generating lib/rte_regexdev_def with a custom command 00:02:52.011 [295/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:52.011 [296/745] Generating lib/rte_regexdev_mingw with a custom command 00:02:52.011 [297/745] Generating lib/rte_dmadev_def with a custom command 00:02:52.011 [298/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:52.011 [299/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.011 [300/745] Generating lib/rte_rib_mingw 
with a custom command 00:02:52.011 [301/745] Generating lib/rte_rib_def with a custom command 00:02:52.011 [302/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:52.011 [303/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:52.011 [304/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:52.011 [305/745] Generating lib/rte_reorder_def with a custom command 00:02:52.011 [306/745] Linking static target lib/librte_mbuf.a 00:02:52.011 [307/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.011 [308/745] Linking static target lib/librte_hash.a 00:02:52.011 [309/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:52.275 [310/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:52.275 [311/745] Generating lib/rte_reorder_mingw with a custom command 00:02:52.275 [312/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.275 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:52.275 [314/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:52.275 [315/745] Linking static target lib/librte_ethdev.a 00:02:52.275 [316/745] Linking static target lib/librte_ip_frag.a 00:02:52.275 [317/745] Linking static target lib/librte_latencystats.a 00:02:52.275 [318/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:52.275 [319/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:52.275 [320/745] Linking static target lib/acl/libavx512_tmp.a 00:02:52.275 [321/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:52.275 [322/745] Linking static target lib/librte_acl.a 00:02:52.275 [323/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:52.275 [324/745] Linking static target lib/librte_efd.a 00:02:52.275 [325/745] Generating lib/rte_sched_def with a custom command 00:02:52.275 
[326/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:52.275 [327/745] Linking static target lib/librte_gso.a 00:02:52.552 [328/745] Generating lib/rte_sched_mingw with a custom command 00:02:52.552 [329/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:52.552 [330/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:52.552 [331/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:52.552 [332/745] Generating lib/rte_security_def with a custom command 00:02:52.552 [333/745] Linking static target lib/librte_rawdev.a 00:02:52.552 [334/745] Generating lib/rte_security_mingw with a custom command 00:02:52.552 [335/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.552 [336/745] Generating lib/rte_stack_def with a custom command 00:02:52.552 [337/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:52.552 [338/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:52.552 [339/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:52.552 [340/745] Linking static target lib/librte_stack.a 00:02:52.552 [341/745] Generating lib/rte_stack_mingw with a custom command 00:02:52.553 [342/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.553 [343/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.553 [344/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.820 [345/745] Linking static target lib/librte_dmadev.a 00:02:52.820 [346/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.820 [347/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.820 [348/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:52.820 [349/745] Compiling C object 
lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.820 [350/745] Generating lib/rte_vhost_def with a custom command 00:02:52.820 [351/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.820 [352/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.082 [353/745] Generating lib/rte_vhost_mingw with a custom command 00:02:53.082 [354/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.082 [355/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:53.082 [356/745] Linking static target lib/librte_pcapng.a 00:02:53.082 [357/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.344 [358/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.344 [359/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:53.344 [360/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:53.344 [361/745] Linking static target lib/librte_regexdev.a 00:02:53.344 [362/745] Generating lib/rte_ipsec_def with a custom command 00:02:53.604 [363/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.604 [364/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:53.604 [365/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:53.604 [366/745] Linking static target lib/librte_lpm.a 00:02:53.604 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.604 [368/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:53.865 [369/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:53.865 [370/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.865 [371/745] Compiling C object 
lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:53.865 [372/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:53.865 [373/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.865 [374/745] Generating lib/rte_fib_def with a custom command 00:02:53.865 [375/745] Linking static target lib/librte_reorder.a 00:02:53.865 [376/745] Generating lib/rte_fib_mingw with a custom command 00:02:54.127 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.127 [378/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.127 [379/745] Linking static target lib/librte_power.a 00:02:54.127 [380/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.127 [381/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.127 [382/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.127 [383/745] Linking static target lib/librte_eventdev.a 00:02:54.388 [384/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.388 [385/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.388 [386/745] Linking static target lib/librte_security.a 00:02:54.388 [387/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:54.388 [388/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.388 [389/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.651 [390/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:54.651 [391/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:54.651 [392/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:54.651 [393/745] Linking static target lib/librte_rib.a 00:02:54.651 [394/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:54.651 [395/745] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:54.651 [396/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:54.917 [397/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:54.917 [398/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:54.917 [399/745] Generating lib/rte_port_def with a custom command 00:02:54.917 [400/745] Generating lib/rte_port_mingw with a custom command 00:02:54.917 [401/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:54.917 [402/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.917 [403/745] Generating lib/rte_pdump_def with a custom command 00:02:54.917 [404/745] Linking static target lib/librte_cryptodev.a 00:02:54.917 [405/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.917 [406/745] Generating lib/rte_pdump_mingw with a custom command 00:02:55.182 [407/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.442 [408/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:55.442 [409/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:55.442 [410/745] Linking static target lib/librte_member.a 00:02:55.442 [411/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.442 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.705 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:55.705 [414/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.705 [415/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:55.968 [416/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.968 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:56.228 [418/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:56.228 [419/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:56.228 [420/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:56.228 [421/745] Linking static target lib/librte_sched.a 00:02:56.228 [422/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:56.228 [423/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:56.228 [424/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:56.228 [425/745] Linking static target lib/librte_fib.a 00:02:56.493 [426/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.493 [427/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:56.755 [428/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:56.755 [429/745] Generating lib/rte_table_def with a custom command 00:02:56.756 [430/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:56.756 [431/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:56.756 [432/745] Generating lib/rte_table_mingw with a custom command 00:02:56.756 [433/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:56.756 [434/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:56.756 [435/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.756 [436/745] Generating lib/rte_pipeline_def with a custom command 00:02:56.756 [437/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.756 [438/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:56.756 [439/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:57.018 [440/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:57.018 [441/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:57.018 [442/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:57.276 [443/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:57.276 [444/745] Generating lib/rte_graph_def with a custom command 00:02:57.276 [445/745] Generating lib/rte_graph_mingw with a custom command 00:02:57.276 [446/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:57.536 [447/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:57.536 [448/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:57.536 [449/745] Linking static target lib/librte_ipsec.a 00:02:57.536 [450/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:57.536 [451/745] Linking static target lib/librte_pdump.a 00:02:57.536 [452/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.536 [453/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:57.801 [454/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.063 [455/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:58.063 [456/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.063 [457/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:58.063 [458/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:58.063 [459/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.063 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:58.063 [461/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:58.063 [462/745] Generating lib/rte_node_def with a custom command 00:02:58.063 [463/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:58.063 [464/745] Generating lib/rte_node_mingw with a custom command 
00:02:58.063 [465/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.324 [466/745] Linking target lib/librte_eal.so.23.0 00:02:58.324 [467/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.324 [468/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:58.324 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:58.324 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:58.324 [471/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:58.324 [472/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:58.324 [473/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:58.324 [474/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:58.324 [475/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:58.324 [476/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:58.324 [477/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:58.324 [478/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:58.589 [479/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:58.589 [480/745] Linking target lib/librte_ring.so.23.0 00:02:58.589 [481/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:58.589 [482/745] Linking target lib/librte_meter.so.23.0 00:02:58.589 [483/745] Linking target lib/librte_pci.so.23.0 00:02:58.589 [484/745] Linking target lib/librte_timer.so.23.0 00:02:58.589 [485/745] Linking target lib/librte_acl.so.23.0 00:02:58.589 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:58.589 [487/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:58.589 [488/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:02:58.589 [489/745] Linking target lib/librte_jobstats.so.23.0 00:02:58.589 [490/745] Linking target lib/librte_cfgfile.so.23.0 00:02:58.589 [491/745] Linking target lib/librte_rawdev.so.23.0 00:02:58.589 [492/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:58.589 [493/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:58.851 [494/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:58.851 [495/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:58.851 [496/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:58.851 [497/745] Linking target lib/librte_stack.so.23.0 00:02:58.851 [498/745] Linking target lib/librte_dmadev.so.23.0 00:02:58.851 [499/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:58.851 [500/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:58.851 [501/745] Linking target lib/librte_rcu.so.23.0 00:02:58.851 [502/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:58.851 [503/745] Linking target lib/librte_mempool.so.23.0 00:02:58.851 [504/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.851 [505/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.851 [506/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:58.851 [507/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.851 [508/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:58.851 [509/745] Linking static target lib/librte_table.a 00:02:58.851 [510/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:59.111 [511/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:59.111 [512/745] Generating symbol file 
lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:59.111 [513/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:59.111 [514/745] Linking target lib/librte_mbuf.so.23.0 00:02:59.111 [515/745] Linking target lib/librte_rib.so.23.0 00:02:59.111 [516/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:59.111 [517/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.111 [518/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:59.111 [519/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:59.111 [520/745] Linking static target drivers/librte_bus_vdev.a 00:02:59.111 [521/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:59.376 [522/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:59.376 [523/745] Linking static target lib/librte_graph.a 00:02:59.376 [524/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:59.376 [525/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:59.376 [526/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.376 [527/745] Linking static target lib/librte_port.a 00:02:59.376 [528/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:59.376 [529/745] Linking target lib/librte_net.so.23.0 00:02:59.376 [530/745] Linking target lib/librte_bbdev.so.23.0 00:02:59.376 [531/745] Linking target lib/librte_compressdev.so.23.0 00:02:59.376 [532/745] Linking target lib/librte_distributor.so.23.0 00:02:59.376 [533/745] Linking target lib/librte_cryptodev.so.23.0 00:02:59.376 [534/745] Linking target lib/librte_gpudev.so.23.0 00:02:59.376 [535/745] Linking target lib/librte_regexdev.so.23.0 00:02:59.376 [536/745] Linking target lib/librte_reorder.so.23.0 00:02:59.640 [537/745] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:59.640 [538/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.640 [539/745] Linking target lib/librte_sched.so.23.0 00:02:59.640 [540/745] Linking target lib/librte_fib.so.23.0 00:02:59.640 [541/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:59.640 [542/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:59.640 [543/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:59.640 [544/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:59.640 [545/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:59.640 [546/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:59.640 [547/745] Linking target lib/librte_ethdev.so.23.0 00:02:59.640 [548/745] Linking target lib/librte_cmdline.so.23.0 00:02:59.640 [549/745] Linking target lib/librte_hash.so.23.0 00:02:59.640 [550/745] Linking target lib/librte_security.so.23.0 00:02:59.901 [551/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:59.901 [552/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:59.901 [553/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:59.901 [554/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.901 [555/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:59.901 [556/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:59.901 [557/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:59.901 [558/745] Linking target lib/librte_metrics.so.23.0 00:02:59.901 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:59.901 [560/745] Linking target 
lib/librte_bpf.so.23.0 00:02:59.901 [561/745] Linking target lib/librte_efd.so.23.0 00:03:00.165 [562/745] Linking target lib/librte_gro.so.23.0 00:03:00.165 [563/745] Linking target lib/librte_gso.so.23.0 00:03:00.165 [564/745] Linking target lib/librte_eventdev.so.23.0 00:03:00.165 [565/745] Linking target lib/librte_ip_frag.so.23.0 00:03:00.165 [566/745] Linking target lib/librte_lpm.so.23.0 00:03:00.165 [567/745] Linking target lib/librte_member.so.23.0 00:03:00.165 [568/745] Linking target lib/librte_pcapng.so.23.0 00:03:00.165 [569/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:00.165 [570/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:00.165 [571/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:00.428 [572/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:00.428 [573/745] Linking target lib/librte_bitratestats.so.23.0 00:03:00.428 [574/745] Linking target lib/librte_power.so.23.0 00:03:00.428 [575/745] Linking target lib/librte_ipsec.so.23.0 00:03:00.428 [576/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:00.428 [577/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.428 [578/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:00.428 [579/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:00.428 [580/745] Linking static target drivers/librte_bus_pci.a 00:03:00.428 [581/745] Linking target lib/librte_latencystats.so.23.0 00:03:00.428 [582/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.428 [583/745] Generating drivers/rte_net_i40e_def with a custom command 00:03:00.428 [584/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:00.428 [585/745] 
Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:00.428 [586/745] Linking target lib/librte_port.so.23.0 00:03:00.428 [587/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:00.428 [588/745] Linking target lib/librte_pdump.so.23.0 00:03:00.690 [589/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.690 [590/745] Linking target lib/librte_graph.so.23.0 00:03:00.690 [591/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:00.953 [592/745] Linking target lib/librte_table.so.23.0 00:03:00.953 [593/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:00.953 [594/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:00.953 [595/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.953 [596/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.953 [597/745] Linking target drivers/librte_bus_pci.so.23.0 00:03:00.954 [598/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.954 [599/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:01.215 [600/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:01.215 [601/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:01.476 [602/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.476 [603/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:01.476 [604/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.476 [605/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.476 [606/745] Linking static target 
drivers/librte_mempool_ring.a 00:03:01.476 [607/745] Linking target drivers/librte_mempool_ring.so.23.0 00:03:01.476 [608/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:01.739 [609/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:01.739 [610/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:01.739 [611/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:01.739 [612/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:02.003 [613/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:02.286 [614/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:02.286 [615/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:02.568 [616/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:02.568 [617/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:02.568 [618/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:02.842 [619/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:02.842 [620/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.103 [621/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:03.103 [622/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:03.103 [623/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:03.103 [624/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:03.364 [625/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:03.364 [626/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:03.621 [627/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:03.621 [628/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:03.621 [629/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:03.621 [630/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:03.621 [631/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:03.884 [632/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.884 [633/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:03.884 [634/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:04.144 [635/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:04.144 [636/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:04.405 [637/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.405 [638/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:04.666 [639/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:04.927 [640/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:04.927 [641/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:05.190 [642/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.456 [643/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:05.456 [644/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:05.721 [645/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:05.721 [646/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:05.982 [647/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.982 [648/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:06.244 [649/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:06.504 [650/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:06.764 [651/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:06.764 [652/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:06.764 [653/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:06.764 [654/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:07.027 [655/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:07.027 [656/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:07.027 [657/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:07.027 [658/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:07.027 [659/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:07.027 [660/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:07.027 [661/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:07.288 [662/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:07.288 [663/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:07.288 [664/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:07.288 [665/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:07.549 [666/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:07.549 [667/745] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:08.119 [668/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:08.119 [669/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:08.119 [670/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:08.119 [671/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:08.381 [672/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:08.640 [673/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:08.640 [674/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:08.640 [675/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:08.902 [676/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:08.902 [677/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:08.902 [678/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:08.902 [679/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:08.902 [680/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:08.902 [681/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:09.164 [682/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:09.164 [683/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:09.164 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:09.164 [685/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:09.164 [686/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:09.164 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:09.423 [688/745] 
Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:09.423 [689/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:09.681 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:09.681 [691/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:09.681 [692/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.681 [693/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:09.681 [694/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:09.681 [695/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:09.681 [696/745] Linking static target drivers/librte_net_i40e.a 00:03:09.939 [697/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:09.939 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:09.939 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:10.197 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:10.197 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:10.197 [702/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:10.197 [703/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:10.455 [704/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.455 [705/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:10.455 [706/745] Linking target drivers/librte_net_i40e.so.23.0 00:03:10.713 [707/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.971 [708/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:10.971 [709/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:10.971 
[710/745] Linking static target lib/librte_node.a 00:03:10.971 [711/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.229 [712/745] Linking target lib/librte_node.so.23.0 00:03:11.229 [713/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:11.796 [714/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:11.796 [715/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:12.363 [716/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.929 [717/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:13.496 [718/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.430 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:19.712 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:51.797 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:51.797 [722/745] Linking static target lib/librte_vhost.a 00:03:51.797 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.797 [724/745] Linking target lib/librte_vhost.so.23.0 00:04:01.762 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:01.762 [726/745] Linking static target lib/librte_pipeline.a 00:04:02.020 [727/745] Linking target app/dpdk-test-gpudev 00:04:02.020 [728/745] Linking target app/dpdk-test-fib 00:04:02.020 [729/745] Linking target app/dpdk-test-acl 00:04:02.279 [730/745] Linking target app/dpdk-dumpcap 00:04:02.279 [731/745] Linking target app/dpdk-test-security-perf 00:04:02.279 [732/745] Linking target app/dpdk-test-crypto-perf 00:04:02.279 [733/745] Linking target app/dpdk-test-regex 00:04:02.279 [734/745] Linking target app/dpdk-test-eventdev 00:04:02.279 [735/745] Linking target app/dpdk-test-compress-perf 00:04:02.279 [736/745] Linking target app/dpdk-pdump 00:04:02.279 
[737/745] Linking target app/dpdk-proc-info 00:04:02.279 [738/745] Linking target app/dpdk-test-flow-perf 00:04:02.279 [739/745] Linking target app/dpdk-testpmd 00:04:02.279 [740/745] Linking target app/dpdk-test-cmdline 00:04:02.279 [741/745] Linking target app/dpdk-test-sad 00:04:02.279 [742/745] Linking target app/dpdk-test-pipeline 00:04:02.279 [743/745] Linking target app/dpdk-test-bbdev 00:04:04.183 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.183 [745/745] Linking target lib/librte_pipeline.so.23.0 00:04:04.183 00:16:31 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j32 install 00:04:04.441 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:04:04.441 [0/1] Installing files. 00:04:04.703 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.703 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:04:04.704 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.704 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:04:04.705 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:04.705 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:04.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:04.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 
00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.706 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:04:04.707 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:04:04.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:04:04.708 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:04:04.708 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:04:04.708 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.708 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.708 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_rcu.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.709 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_hash.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing 
lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing 
lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.968 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_sched.so.23.0 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:04.969 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:04:05.232 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:04:05.232 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:04:05.232 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:04:05.232 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:04:05.232 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.232 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.233 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.234 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:04:05.235 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:04:05.235 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:04:05.235 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:04:05.235 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:04:05.235 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:04:05.235 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:04:05.235 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:04:05.235 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:04:05.235 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:04:05.235 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:04:05.235 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:04:05.235 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:04:05.235 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:04:05.235 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:04:05.235 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:04:05.235 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:04:05.235 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:04:05.235 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:04:05.236 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:04:05.236 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23
00:04:05.236 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so
00:04:05.236 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23
00:04:05.236 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so
00:04:05.236 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23
00:04:05.236 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so
00:04:05.236 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23
00:04:05.236 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so
00:04:05.236 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23
00:04:05.236 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so
00:04:05.236 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23
00:04:05.236 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so
00:04:05.236 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23
00:04:05.236 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so
00:04:05.236 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23
00:04:05.236 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so
00:04:05.236 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23
00:04:05.236 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so
00:04:05.236 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23
00:04:05.236 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so
00:04:05.236 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23
00:04:05.236 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so
00:04:05.236 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23
00:04:05.236 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so
00:04:05.236 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23
00:04:05.236 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so
00:04:05.236 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23
00:04:05.236 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so
00:04:05.236 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23
00:04:05.236 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so
00:04:05.236 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23
00:04:05.236 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so
00:04:05.236 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23
00:04:05.236 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so
00:04:05.236 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23
00:04:05.236 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so
00:04:05.236 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23
00:04:05.236 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so
00:04:05.236 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23
00:04:05.236 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so
00:04:05.236 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23
00:04:05.236 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so
00:04:05.236 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23
00:04:05.236 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so
00:04:05.236 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23
00:04:05.236 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so
00:04:05.236 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23
00:04:05.236 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so
00:04:05.236 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23
00:04:05.236 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so
00:04:05.236 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23
00:04:05.236 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so
00:04:05.236 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23
00:04:05.236 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so
00:04:05.236 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23
00:04:05.236 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so
00:04:05.236 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23
00:04:05.236 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so
00:04:05.236 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23
00:04:05.236 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so
00:04:05.236 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23
00:04:05.236 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so
00:04:05.236 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23
00:04:05.236 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so
00:04:05.236 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23
00:04:05.236 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so
00:04:05.236 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23
00:04:05.236 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so
00:04:05.236 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23
00:04:05.236 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so
00:04:05.236 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23
00:04:05.236 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so
00:04:05.236 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23
00:04:05.236 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so
00:04:05.236 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23
00:04:05.236 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so
00:04:05.236 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23
00:04:05.236 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so
00:04:05.236 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23
00:04:05.236 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so
00:04:05.236 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23
00:04:05.237 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so
00:04:05.237 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23
00:04:05.237 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so
00:04:05.237 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23
00:04:05.237 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so
00:04:05.237 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:04:05.237 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:04:05.237 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:04:05.237 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:04:05.237 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so'
00:04:05.237 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23'
00:04:05.237 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:04:05.237 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:04:05.237 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:04:05.237 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:04:05.237 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:04:05.237 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:04:05.237 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:04:05.237 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:04:05.237 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:04:05.237 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:04:05.237 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:04:05.237 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:04:05.237 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:04:05.237 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:04:05.237 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:04:05.237 00:16:33 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s
00:04:05.237 00:16:33 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:04:05.237 00:16:33 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat
00:04:05.237 00:16:33 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:05.237
00:04:05.237 real	1m27.174s
00:04:05.237 user	14m9.568s
00:04:05.237 sys	1m39.402s
00:04:05.237 00:16:33 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable
00:04:05.237 00:16:33 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:04:05.237 ************************************
00:04:05.237 END TEST build_native_dpdk
00:04:05.237 ************************************
00:04:05.237 00:16:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:05.237 00:16:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:05.237 00:16:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:04:05.496 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:04:05.496 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:04:05.496 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:04:05.496 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:04:06.065 Using 'verbs' RDMA provider
00:04:16.616 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:04:28.896 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:04:28.896 Creating mk/config.mk...done.
00:04:28.896 Creating mk/cc.flags.mk...done.
00:04:28.896 Type 'make' to build.
00:04:28.896 00:16:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j32
00:04:28.896 00:16:55 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
00:04:28.896 00:16:55 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:04:28.896 00:16:55 -- common/autotest_common.sh@10 -- $ set +x
00:04:28.896 ************************************
00:04:28.896 START TEST make
00:04:28.896 ************************************
00:04:28.896 00:16:55 make -- common/autotest_common.sh@1121 -- $ make -j32
00:04:28.896 make[1]: Nothing to be done for 'all'.
00:04:29.474 The Meson build system
00:04:29.474 Version: 1.3.1
00:04:29.474 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:04:29.474 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:29.474 Build type: native build
00:04:29.474 Project name: libvfio-user
00:04:29.474 Project version: 0.0.1
00:04:29.474 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:04:29.474 C linker for the host machine: gcc ld.bfd 2.39-16
00:04:29.474 Host machine cpu family: x86_64
00:04:29.474 Host machine cpu: x86_64
00:04:29.474 Run-time dependency threads found: YES
00:04:29.474 Library dl found: YES
00:04:29.474 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:04:29.474 Run-time dependency json-c found: YES 0.17
00:04:29.474 Run-time dependency cmocka found: YES 1.1.7
00:04:29.474 Program pytest-3 found: NO
00:04:29.474 Program flake8 found: NO
00:04:29.474 Program misspell-fixer found: NO
00:04:29.474 Program restructuredtext-lint found: NO
00:04:29.474 Program valgrind found: YES (/usr/bin/valgrind)
00:04:29.474 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:29.474 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:29.474 Compiler for C supports arguments -Wwrite-strings: YES
00:04:29.474 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:29.474 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:04:29.474 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:04:29.474 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:04:29.474 Build targets in project: 8
00:04:29.474 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:04:29.474 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:04:29.474
00:04:29.474 libvfio-user 0.0.1
00:04:29.474
00:04:29.474 User defined options
00:04:29.474 buildtype : debug
00:04:29.474 default_library: shared
00:04:29.474 libdir : /usr/local/lib
00:04:29.474
00:04:29.474 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:30.048 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:30.310 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:04:30.310 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:04:30.310 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:04:30.310 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:04:30.310 [5/37] Compiling C object samples/null.p/null.c.o
00:04:30.310 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:04:30.310 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:04:30.310 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:04:30.310 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:04:30.310 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:04:30.310 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:04:30.310 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:04:30.310 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:04:30.571 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:04:30.571 [15/37] Compiling C object samples/server.p/server.c.o
00:04:30.571 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:04:30.571 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:04:30.571 [18/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:04:30.571 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:04:30.571 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:04:30.571 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:04:30.571 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:04:30.571 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:04:30.571 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:04:30.571 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:04:30.571 [26/37] Compiling C object samples/client.p/client.c.o
00:04:30.571 [27/37] Linking target samples/client
00:04:30.571 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:04:30.830 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:04:30.830 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:04:30.830 [31/37] Linking target test/unit_tests
00:04:30.830 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:04:30.830 [33/37] Linking target samples/server
00:04:31.090 [34/37] Linking target samples/null
00:04:31.090 [35/37] Linking target samples/lspci
00:04:31.090 [36/37] Linking target samples/shadow_ioeventfd_server
00:04:31.090 [37/37] Linking target samples/gpio-pci-idio-16
00:04:31.090 INFO: autodetecting backend as ninja
00:04:31.090 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:31.090 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:04:31.668 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:04:31.668 ninja: no work to do.
00:04:46.561 CC lib/ut_mock/mock.o
00:04:46.561 CC lib/ut/ut.o
00:04:46.561 CC lib/log/log.o
00:04:46.561 CC lib/log/log_flags.o
00:04:46.561 CC lib/log/log_deprecated.o
00:04:46.561 LIB libspdk_log.a
00:04:46.561 LIB libspdk_ut.a
00:04:46.561 LIB libspdk_ut_mock.a
00:04:46.561 SO libspdk_ut_mock.so.6.0
00:04:46.561 SO libspdk_ut.so.2.0
00:04:46.561 SO libspdk_log.so.7.0
00:04:46.561 SYMLINK libspdk_ut_mock.so
00:04:46.561 SYMLINK libspdk_ut.so
00:04:46.561 SYMLINK libspdk_log.so
00:04:46.561 CXX lib/trace_parser/trace.o
00:04:46.561 CC lib/dma/dma.o
00:04:46.561 CC lib/ioat/ioat.o
00:04:46.561 CC lib/util/base64.o
00:04:46.561 CC lib/util/bit_array.o
00:04:46.561 CC lib/util/cpuset.o
00:04:46.561 CC lib/util/crc16.o
00:04:46.561 CC lib/util/crc32.o
00:04:46.561 CC lib/util/crc32c.o
00:04:46.561 CC lib/util/crc32_ieee.o
00:04:46.561 CC lib/util/crc64.o
00:04:46.561 CC lib/util/dif.o
00:04:46.561 CC lib/util/fd.o
00:04:46.561 CC lib/util/file.o
00:04:46.561 CC lib/util/hexlify.o
00:04:46.561 CC lib/util/iov.o
00:04:46.561 CC lib/util/math.o
00:04:46.561 CC lib/util/pipe.o
00:04:46.561 CC lib/util/strerror_tls.o
00:04:46.561 CC lib/util/string.o
00:04:46.561 CC lib/util/uuid.o
00:04:46.561 CC lib/util/fd_group.o
00:04:46.561 CC lib/util/xor.o
00:04:46.561 CC lib/util/zipf.o
00:04:46.561 CC lib/vfio_user/host/vfio_user_pci.o
00:04:46.561 CC lib/vfio_user/host/vfio_user.o
00:04:46.561 LIB libspdk_dma.a
00:04:46.561 SO libspdk_dma.so.4.0
00:04:46.561 SYMLINK libspdk_dma.so
00:04:46.561 LIB libspdk_ioat.a
00:04:46.561 SO libspdk_ioat.so.7.0
00:04:46.561 SYMLINK libspdk_ioat.so
00:04:46.561 LIB libspdk_vfio_user.a
00:04:46.561 SO libspdk_vfio_user.so.5.0
00:04:46.561 SYMLINK libspdk_vfio_user.so
00:04:46.561 LIB libspdk_util.a
00:04:46.561 SO libspdk_util.so.9.0
00:04:46.561 SYMLINK libspdk_util.so
00:04:46.561 LIB libspdk_trace_parser.a
00:04:46.561 CC lib/idxd/idxd.o
00:04:46.561 CC lib/rdma/common.o
00:04:46.561 CC lib/idxd/idxd_user.o
00:04:46.561 CC lib/vmd/vmd.o
00:04:46.561 CC lib/json/json_parse.o
00:04:46.561 CC lib/rdma/rdma_verbs.o
00:04:46.561 CC lib/idxd/idxd_kernel.o
00:04:46.561 CC lib/env_dpdk/env.o
00:04:46.561 CC lib/json/json_util.o
00:04:46.561 CC lib/env_dpdk/memory.o
00:04:46.561 CC lib/vmd/led.o
00:04:46.561 CC lib/json/json_write.o
00:04:46.561 CC lib/env_dpdk/pci.o
00:04:46.561 CC lib/env_dpdk/threads.o
00:04:46.561 CC lib/env_dpdk/init.o
00:04:46.561 CC lib/conf/conf.o
00:04:46.561 CC lib/env_dpdk/pci_ioat.o
00:04:46.561 CC lib/env_dpdk/pci_virtio.o
00:04:46.561 CC lib/env_dpdk/pci_vmd.o
00:04:46.561 CC lib/env_dpdk/pci_idxd.o
00:04:46.561 CC lib/env_dpdk/pci_event.o
00:04:46.561 CC lib/env_dpdk/sigbus_handler.o
00:04:46.561 CC lib/env_dpdk/pci_dpdk.o
00:04:46.561 CC lib/env_dpdk/pci_dpdk_2207.o
00:04:46.561 CC lib/env_dpdk/pci_dpdk_2211.o
00:04:46.561 SO libspdk_trace_parser.so.5.0
00:04:46.820 SYMLINK libspdk_trace_parser.so
00:04:46.820 LIB libspdk_rdma.a
00:04:46.820 SO libspdk_rdma.so.6.0
00:04:46.820 LIB libspdk_conf.a
00:04:46.820 SYMLINK libspdk_rdma.so
00:04:46.820 SO libspdk_conf.so.6.0
00:04:47.077 SYMLINK libspdk_conf.so
00:04:47.077 LIB libspdk_json.a
00:04:47.077 SO libspdk_json.so.6.0
00:04:47.077 SYMLINK libspdk_json.so
00:04:47.077 LIB libspdk_idxd.a
00:04:47.077 SO libspdk_idxd.so.12.0
00:04:47.077 SYMLINK libspdk_idxd.so
00:04:47.335 CC lib/jsonrpc/jsonrpc_server.o
00:04:47.335 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:47.335 CC lib/jsonrpc/jsonrpc_client.o
00:04:47.335 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:47.335 LIB libspdk_vmd.a
00:04:47.335 SO libspdk_vmd.so.6.0
00:04:47.335 SYMLINK libspdk_vmd.so
00:04:47.592 LIB libspdk_jsonrpc.a
00:04:47.592 SO libspdk_jsonrpc.so.6.0
00:04:47.592 SYMLINK libspdk_jsonrpc.so
00:04:47.849 CC lib/rpc/rpc.o
00:04:47.849 LIB libspdk_rpc.a
00:04:48.107 SO libspdk_rpc.so.6.0
00:04:48.107 SYMLINK libspdk_rpc.so
00:04:48.107 CC lib/notify/notify.o
00:04:48.107 CC lib/trace/trace.o
00:04:48.107 CC lib/keyring/keyring.o
00:04:48.107 CC lib/notify/notify_rpc.o
00:04:48.107 CC lib/keyring/keyring_rpc.o
00:04:48.107 CC lib/trace/trace_flags.o
00:04:48.107 CC lib/trace/trace_rpc.o
00:04:48.364 LIB libspdk_notify.a
00:04:48.364 SO libspdk_notify.so.6.0
00:04:48.364 LIB libspdk_keyring.a
00:04:48.364 SYMLINK libspdk_notify.so
00:04:48.364 SO libspdk_keyring.so.1.0
00:04:48.364 LIB libspdk_trace.a
00:04:48.364 SO libspdk_trace.so.10.0
00:04:48.620 SYMLINK libspdk_keyring.so
00:04:48.620 SYMLINK libspdk_trace.so
00:04:48.620 LIB libspdk_env_dpdk.a
00:04:48.620 SO libspdk_env_dpdk.so.14.0
00:04:48.620 CC lib/thread/thread.o
00:04:48.620 CC lib/sock/sock.o
00:04:48.620 CC lib/thread/iobuf.o
00:04:48.620 CC lib/sock/sock_rpc.o
00:04:48.877 SYMLINK libspdk_env_dpdk.so
00:04:49.135 LIB libspdk_sock.a
00:04:49.135 SO libspdk_sock.so.9.0
00:04:49.135 SYMLINK libspdk_sock.so
00:04:49.394 CC lib/nvme/nvme_ctrlr_cmd.o
00:04:49.394 CC lib/nvme/nvme_ctrlr.o
00:04:49.394 CC lib/nvme/nvme_fabric.o
00:04:49.394 CC lib/nvme/nvme_ns_cmd.o
00:04:49.394 CC lib/nvme/nvme_ns.o
00:04:49.394 CC lib/nvme/nvme_pcie_common.o
00:04:49.394 CC lib/nvme/nvme_pcie.o
00:04:49.394 CC lib/nvme/nvme_qpair.o
00:04:49.394 CC lib/nvme/nvme.o
00:04:49.394 CC lib/nvme/nvme_quirks.o
00:04:49.394 CC lib/nvme/nvme_transport.o
00:04:49.394 CC lib/nvme/nvme_discovery.o
00:04:49.394 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:49.394 CC lib/nvme/nvme_tcp.o
00:04:49.394 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:49.394 CC lib/nvme/nvme_opal.o
00:04:49.394 CC lib/nvme/nvme_io_msg.o
00:04:49.394 CC lib/nvme/nvme_poll_group.o
00:04:49.394 CC lib/nvme/nvme_zns.o
00:04:49.394 CC lib/nvme/nvme_stubs.o
00:04:49.394 CC lib/nvme/nvme_auth.o
00:04:49.394 CC lib/nvme/nvme_cuse.o
00:04:49.394 CC lib/nvme/nvme_vfio_user.o
00:04:49.394 CC lib/nvme/nvme_rdma.o
00:04:50.767 LIB libspdk_thread.a
00:04:50.767 SO libspdk_thread.so.10.0
00:04:50.767 SYMLINK libspdk_thread.so
00:04:51.024 CC lib/virtio/virtio.o
00:04:51.024 CC lib/virtio/virtio_vhost_user.o
00:04:51.024 CC lib/init/json_config.o
00:04:51.024 CC lib/init/subsystem.o
00:04:51.024 CC lib/vfu_tgt/tgt_endpoint.o
00:04:51.024 CC lib/virtio/virtio_vfio_user.o
00:04:51.024 CC lib/blob/blobstore.o
00:04:51.024 CC lib/init/subsystem_rpc.o
00:04:51.024 CC lib/accel/accel.o
00:04:51.024 CC lib/accel/accel_rpc.o
00:04:51.024 CC lib/blob/request.o
00:04:51.024 CC lib/init/rpc.o
00:04:51.024 CC lib/virtio/virtio_pci.o
00:04:51.024 CC lib/blob/zeroes.o
00:04:51.024 CC lib/accel/accel_sw.o
00:04:51.024 CC lib/vfu_tgt/tgt_rpc.o
00:04:51.024 CC lib/blob/blob_bs_dev.o
00:04:51.317 LIB libspdk_init.a
00:04:51.317 SO libspdk_init.so.5.0
00:04:51.317 LIB libspdk_vfu_tgt.a
00:04:51.317 SO libspdk_vfu_tgt.so.3.0
00:04:51.317 SYMLINK libspdk_init.so
00:04:51.317 LIB libspdk_virtio.a
00:04:51.317 SO libspdk_virtio.so.7.0
00:04:51.318 SYMLINK libspdk_vfu_tgt.so
00:04:51.318 SYMLINK libspdk_virtio.so
00:04:51.574 CC lib/event/app.o
00:04:51.574 CC lib/event/reactor.o
00:04:51.574 CC lib/event/log_rpc.o
00:04:51.574 CC lib/event/app_rpc.o
00:04:51.574 CC lib/event/scheduler_static.o
00:04:51.831 LIB libspdk_event.a
00:04:51.831 SO libspdk_event.so.13.0
00:04:51.831 LIB libspdk_accel.a
00:04:52.088 SYMLINK libspdk_event.so
00:04:52.088 SO libspdk_accel.so.15.0
00:04:52.088 LIB libspdk_nvme.a
00:04:52.088 SYMLINK libspdk_accel.so
00:04:52.088 SO libspdk_nvme.so.13.0
00:04:52.088 CC lib/bdev/bdev.o
00:04:52.088 CC lib/bdev/bdev_rpc.o
00:04:52.088 CC lib/bdev/bdev_zone.o
00:04:52.088 CC lib/bdev/part.o
00:04:52.088 CC lib/bdev/scsi_nvme.o
00:04:52.344 SYMLINK libspdk_nvme.so
00:04:54.276 LIB libspdk_blob.a
00:04:54.276 SO libspdk_blob.so.11.0
00:04:54.276 SYMLINK libspdk_blob.so
00:04:54.276 CC lib/blobfs/blobfs.o
00:04:54.276 CC lib/blobfs/tree.o
00:04:54.276 CC lib/lvol/lvol.o
00:04:54.534 LIB libspdk_bdev.a
00:04:54.794 SO libspdk_bdev.so.15.0
00:04:54.794 SYMLINK libspdk_bdev.so
00:04:54.794 CC lib/nbd/nbd.o
00:04:54.794 CC lib/scsi/dev.o
00:04:54.794 CC lib/ublk/ublk.o
00:04:54.794 CC lib/nvmf/ctrlr.o
00:04:54.794 CC lib/ftl/ftl_core.o
00:04:54.794 CC lib/scsi/lun.o
00:04:54.794 CC lib/ublk/ublk_rpc.o
00:04:54.794 CC lib/nbd/nbd_rpc.o
00:04:54.794 CC lib/ftl/ftl_init.o
00:04:54.794 CC lib/scsi/port.o
00:04:54.794 CC lib/nvmf/ctrlr_discovery.o
00:04:54.794 CC lib/scsi/scsi.o
00:04:54.794 CC lib/ftl/ftl_layout.o
00:04:54.794 CC lib/nvmf/ctrlr_bdev.o
00:04:54.794 CC lib/nvmf/subsystem.o
00:04:54.794 CC lib/ftl/ftl_debug.o
00:04:54.794 CC lib/ftl/ftl_io.o
00:04:54.794 CC lib/scsi/scsi_bdev.o
00:04:54.794 CC lib/nvmf/nvmf.o
00:04:54.794 CC lib/scsi/scsi_pr.o
00:04:54.794 CC lib/ftl/ftl_sb.o
00:04:54.794 CC lib/nvmf/nvmf_rpc.o
00:04:54.794 CC lib/scsi/scsi_rpc.o
00:04:54.794 CC lib/ftl/ftl_l2p.o
00:04:54.794 CC lib/nvmf/transport.o
00:04:54.794 CC lib/scsi/task.o
00:04:54.794 CC lib/nvmf/tcp.o
00:04:54.794 CC lib/ftl/ftl_l2p_flat.o
00:04:54.794 CC lib/nvmf/stubs.o
00:04:54.794 CC lib/nvmf/mdns_server.o
00:04:55.058 LIB libspdk_blobfs.a
00:04:55.058 SO libspdk_blobfs.so.10.0
00:04:55.058 SYMLINK libspdk_blobfs.so
00:04:55.058 CC lib/ftl/ftl_nv_cache.o
00:04:55.320 LIB libspdk_lvol.a
00:04:55.320 SO libspdk_lvol.so.10.0
00:04:55.320 CC lib/ftl/ftl_band.o
00:04:55.320 CC lib/nvmf/vfio_user.o
00:04:55.320 CC lib/ftl/ftl_band_ops.o
00:04:55.320 CC lib/nvmf/rdma.o
00:04:55.320 CC lib/nvmf/auth.o
00:04:55.320 CC lib/ftl/ftl_writer.o
00:04:55.320 CC lib/ftl/ftl_reloc.o
00:04:55.320 CC lib/ftl/ftl_rq.o
00:04:55.320 SYMLINK libspdk_lvol.so
00:04:55.320 CC lib/ftl/ftl_l2p_cache.o
00:04:55.320 CC lib/ftl/ftl_p2l.o
00:04:55.320 CC lib/ftl/mngt/ftl_mngt.o
00:04:55.320 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:55.320 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:55.320 CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:55.585 CC lib/ftl/mngt/ftl_mngt_md.o
00:04:55.585 CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:55.585 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:55.585 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:55.585 LIB libspdk_nbd.a
00:04:55.849 CC lib/ftl/mngt/ftl_mngt_band.o
00:04:55.849 SO libspdk_nbd.so.7.0
00:04:55.849 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:55.849 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:55.849 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:55.849 LIB libspdk_scsi.a
00:04:55.849 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:55.849 SYMLINK libspdk_nbd.so
00:04:55.849 CC lib/ftl/utils/ftl_conf.o
00:04:55.849 CC lib/ftl/utils/ftl_md.o
00:04:55.849 CC lib/ftl/utils/ftl_mempool.o
00:04:55.849 SO libspdk_scsi.so.9.0
00:04:55.849 CC lib/ftl/utils/ftl_bitmap.o
00:04:55.849 CC lib/ftl/utils/ftl_property.o
00:04:55.849 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:56.111 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:56.111 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:56.111 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:56.111 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:56.111 SYMLINK libspdk_scsi.so
00:04:56.111 LIB libspdk_ublk.a
00:04:56.111 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:56.111 SO libspdk_ublk.so.3.0
00:04:56.111 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:56.111 CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:56.111 CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:56.111 CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:56.111 SYMLINK libspdk_ublk.so
00:04:56.111 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:56.111 CC lib/ftl/base/ftl_base_dev.o
00:04:56.111 CC lib/ftl/base/ftl_base_bdev.o
00:04:56.111 CC lib/ftl/ftl_trace.o
00:04:56.371 CC lib/iscsi/conn.o
00:04:56.371 CC lib/iscsi/init_grp.o
00:04:56.371 CC lib/iscsi/iscsi.o
00:04:56.371 CC lib/iscsi/md5.o
00:04:56.371 CC lib/vhost/vhost.o
00:04:56.371 CC lib/vhost/vhost_rpc.o
00:04:56.371 CC lib/iscsi/param.o
00:04:56.371 CC lib/vhost/vhost_scsi.o
00:04:56.371 CC lib/vhost/vhost_blk.o
00:04:56.371 CC lib/iscsi/portal_grp.o
00:04:56.371 CC lib/iscsi/tgt_node.o
00:04:56.371 CC lib/vhost/rte_vhost_user.o
00:04:56.371 CC lib/iscsi/iscsi_subsystem.o
00:04:56.371 CC lib/iscsi/iscsi_rpc.o
00:04:56.629 CC lib/iscsi/task.o
00:04:56.888 LIB libspdk_ftl.a
00:04:56.888 SO libspdk_ftl.so.9.0
00:04:57.455 SYMLINK libspdk_ftl.so
00:04:57.713 LIB libspdk_vhost.a
00:04:57.713 SO libspdk_vhost.so.8.0
00:04:57.972 LIB libspdk_iscsi.a
00:04:57.972 SYMLINK libspdk_vhost.so
00:04:57.972 SO libspdk_iscsi.so.8.0
00:04:57.972 LIB libspdk_nvmf.a
00:04:57.972 SYMLINK libspdk_iscsi.so
00:04:58.232 SO libspdk_nvmf.so.18.0
00:04:58.232 SYMLINK libspdk_nvmf.so
00:04:58.798 CC module/env_dpdk/env_dpdk_rpc.o
00:04:58.798 CC module/vfu_device/vfu_virtio.o
00:04:58.798 CC module/vfu_device/vfu_virtio_blk.o
00:04:58.798 CC module/vfu_device/vfu_virtio_scsi.o
00:04:58.798 CC module/vfu_device/vfu_virtio_rpc.o
00:04:58.798 CC module/keyring/linux/keyring.o
00:04:58.798 CC module/accel/dsa/accel_dsa.o
00:04:58.798 CC module/accel/dsa/accel_dsa_rpc.o
00:04:58.798 CC module/keyring/linux/keyring_rpc.o
00:04:58.798 CC module/accel/iaa/accel_iaa.o
00:04:58.798 CC module/accel/iaa/accel_iaa_rpc.o
00:04:58.798 CC module/sock/posix/posix.o
00:04:58.798 CC module/blob/bdev/blob_bdev.o
00:04:58.798 CC module/accel/error/accel_error.o
00:04:58.798 CC module/keyring/file/keyring.o
00:04:58.798 CC module/accel/error/accel_error_rpc.o
00:04:58.798 CC module/keyring/file/keyring_rpc.o
00:04:58.798 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:58.798 CC module/scheduler/gscheduler/gscheduler.o
00:04:58.798 CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:58.798 CC module/accel/ioat/accel_ioat.o
00:04:58.798 CC module/accel/ioat/accel_ioat_rpc.o
00:04:58.798 LIB libspdk_env_dpdk_rpc.a
00:04:58.798 SO libspdk_env_dpdk_rpc.so.6.0
00:04:58.798 LIB libspdk_keyring_linux.a
00:04:58.798 SO libspdk_keyring_linux.so.1.0
00:04:58.798 LIB libspdk_keyring_file.a
00:04:59.056 SYMLINK libspdk_env_dpdk_rpc.so
00:04:59.056 LIB libspdk_accel_error.a
00:04:59.056 SO libspdk_keyring_file.so.1.0
00:04:59.056 SO libspdk_accel_error.so.2.0
00:04:59.056 SYMLINK libspdk_keyring_linux.so
00:04:59.056 LIB libspdk_scheduler_dpdk_governor.a
00:04:59.056 LIB libspdk_scheduler_gscheduler.a
00:04:59.056 SYMLINK libspdk_keyring_file.so
00:04:59.056 LIB libspdk_accel_dsa.a
00:04:59.056 SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:59.056 SO libspdk_scheduler_gscheduler.so.4.0
00:04:59.056 SYMLINK libspdk_accel_error.so
00:04:59.056 LIB libspdk_blob_bdev.a
00:04:59.056 LIB libspdk_scheduler_dynamic.a
00:04:59.056 SO libspdk_accel_dsa.so.5.0
00:04:59.056 LIB libspdk_accel_iaa.a
00:04:59.056 SO libspdk_blob_bdev.so.11.0
00:04:59.056 LIB libspdk_accel_ioat.a
00:04:59.056 SO libspdk_scheduler_dynamic.so.4.0
00:04:59.056 SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:59.056 SYMLINK libspdk_scheduler_gscheduler.so
00:04:59.056 SO libspdk_accel_iaa.so.3.0
00:04:59.056 SO libspdk_accel_ioat.so.6.0
00:04:59.056 SYMLINK libspdk_blob_bdev.so
00:04:59.056 SYMLINK libspdk_accel_dsa.so
00:04:59.056 SYMLINK libspdk_scheduler_dynamic.so
00:04:59.056 SYMLINK libspdk_accel_iaa.so
00:04:59.056 SYMLINK libspdk_accel_ioat.so
00:04:59.322 LIB libspdk_vfu_device.a
00:04:59.322 CC module/bdev/delay/vbdev_delay.o
00:04:59.322 CC module/bdev/split/vbdev_split.o
00:04:59.322 CC module/bdev/delay/vbdev_delay_rpc.o
00:04:59.322 CC module/bdev/split/vbdev_split_rpc.o
00:04:59.322 CC module/bdev/null/bdev_null.o
00:04:59.322 CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:59.322 CC module/bdev/null/bdev_null_rpc.o
00:04:59.322 CC module/bdev/virtio/bdev_virtio_blk.o
00:04:59.322 CC module/bdev/ftl/bdev_ftl.o
00:04:59.322 CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:59.322 CC module/bdev/aio/bdev_aio.o
00:04:59.322 CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:59.322 CC module/bdev/error/vbdev_error.o
00:04:59.323 CC module/bdev/aio/bdev_aio_rpc.o
00:04:59.323 CC module/bdev/error/vbdev_error_rpc.o
00:04:59.323 CC module/bdev/zone_block/vbdev_zone_block.o
00:04:59.323 CC module/bdev/raid/bdev_raid.o
00:04:59.323 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:59.323 CC module/bdev/raid/bdev_raid_rpc.o
00:04:59.323 CC module/bdev/raid/bdev_raid_sb.o
00:04:59.323 CC module/bdev/raid/raid0.o
00:04:59.323 CC module/bdev/nvme/bdev_nvme.o
00:04:59.323 CC module/blobfs/bdev/blobfs_bdev.o
00:04:59.323 CC module/bdev/passthru/vbdev_passthru.o
00:04:59.323 CC module/bdev/raid/raid1.o
00:04:59.323 CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:59.323 CC module/bdev/gpt/gpt.o
00:04:59.323 CC module/bdev/malloc/bdev_malloc.o
00:04:59.323 CC module/bdev/lvol/vbdev_lvol.o
00:04:59.323 CC module/bdev/iscsi/bdev_iscsi.o
00:04:59.323 SO libspdk_vfu_device.so.3.0
00:04:59.592 SYMLINK libspdk_vfu_device.so
00:04:59.592 CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:59.592 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:59.592 CC module/bdev/nvme/nvme_rpc.o
00:04:59.592 CC module/bdev/gpt/vbdev_gpt.o
00:04:59.592 CC module/bdev/nvme/bdev_mdns_client.o
00:04:59.851 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:59.851 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:59.851 CC module/bdev/raid/concat.o
00:04:59.851 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:59.851 CC module/bdev/nvme/vbdev_opal.o
00:04:59.851 LIB libspdk_sock_posix.a
00:04:59.851 CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:59.851 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:59.851 SO libspdk_sock_posix.so.6.0
00:04:59.851 LIB libspdk_bdev_split.a
00:04:59.851 LIB libspdk_bdev_null.a
00:04:59.851 SO libspdk_bdev_split.so.6.0
00:04:59.851 LIB libspdk_bdev_error.a
00:04:59.851 SO libspdk_bdev_null.so.6.0
00:04:59.851 LIB libspdk_bdev_ftl.a
00:04:59.851 SO libspdk_bdev_error.so.6.0
00:04:59.851 SYMLINK libspdk_sock_posix.so
00:04:59.851 SO libspdk_bdev_ftl.so.6.0
00:05:00.109 SYMLINK libspdk_bdev_split.so
00:05:00.109 LIB
libspdk_bdev_aio.a 00:05:00.109 LIB libspdk_bdev_zone_block.a 00:05:00.109 LIB libspdk_bdev_passthru.a 00:05:00.109 SYMLINK libspdk_bdev_null.so 00:05:00.109 SO libspdk_bdev_aio.so.6.0 00:05:00.109 SO libspdk_bdev_passthru.so.6.0 00:05:00.109 SO libspdk_bdev_zone_block.so.6.0 00:05:00.109 LIB libspdk_bdev_iscsi.a 00:05:00.109 SYMLINK libspdk_bdev_error.so 00:05:00.109 LIB libspdk_blobfs_bdev.a 00:05:00.109 LIB libspdk_bdev_malloc.a 00:05:00.109 LIB libspdk_bdev_delay.a 00:05:00.109 SYMLINK libspdk_bdev_ftl.so 00:05:00.109 SO libspdk_bdev_iscsi.so.6.0 00:05:00.109 SO libspdk_blobfs_bdev.so.6.0 00:05:00.109 SO libspdk_bdev_malloc.so.6.0 00:05:00.109 SO libspdk_bdev_delay.so.6.0 00:05:00.109 SYMLINK libspdk_bdev_aio.so 00:05:00.109 SYMLINK libspdk_bdev_zone_block.so 00:05:00.109 SYMLINK libspdk_bdev_passthru.so 00:05:00.109 LIB libspdk_bdev_gpt.a 00:05:00.109 SYMLINK libspdk_bdev_iscsi.so 00:05:00.109 SYMLINK libspdk_blobfs_bdev.so 00:05:00.109 SO libspdk_bdev_gpt.so.6.0 00:05:00.109 SYMLINK libspdk_bdev_malloc.so 00:05:00.109 SYMLINK libspdk_bdev_delay.so 00:05:00.109 LIB libspdk_bdev_virtio.a 00:05:00.109 SYMLINK libspdk_bdev_gpt.so 00:05:00.109 SO libspdk_bdev_virtio.so.6.0 00:05:00.368 LIB libspdk_bdev_lvol.a 00:05:00.368 SYMLINK libspdk_bdev_virtio.so 00:05:00.368 SO libspdk_bdev_lvol.so.6.0 00:05:00.368 SYMLINK libspdk_bdev_lvol.so 00:05:00.626 LIB libspdk_bdev_raid.a 00:05:00.626 SO libspdk_bdev_raid.so.6.0 00:05:00.626 SYMLINK libspdk_bdev_raid.so 00:05:02.004 LIB libspdk_bdev_nvme.a 00:05:02.004 SO libspdk_bdev_nvme.so.7.0 00:05:02.004 SYMLINK libspdk_bdev_nvme.so 00:05:02.573 CC module/event/subsystems/vmd/vmd.o 00:05:02.573 CC module/event/subsystems/sock/sock.o 00:05:02.573 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:02.573 CC module/event/subsystems/keyring/keyring.o 00:05:02.573 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:02.573 CC module/event/subsystems/iobuf/iobuf.o 00:05:02.573 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:02.573 
CC module/event/subsystems/scheduler/scheduler.o 00:05:02.573 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:02.573 LIB libspdk_event_keyring.a 00:05:02.573 LIB libspdk_event_vhost_blk.a 00:05:02.573 LIB libspdk_event_vfu_tgt.a 00:05:02.573 LIB libspdk_event_sock.a 00:05:02.573 LIB libspdk_event_scheduler.a 00:05:02.573 LIB libspdk_event_vmd.a 00:05:02.573 SO libspdk_event_keyring.so.1.0 00:05:02.573 LIB libspdk_event_iobuf.a 00:05:02.573 SO libspdk_event_vfu_tgt.so.3.0 00:05:02.573 SO libspdk_event_vhost_blk.so.3.0 00:05:02.573 SO libspdk_event_scheduler.so.4.0 00:05:02.573 SO libspdk_event_sock.so.5.0 00:05:02.573 SO libspdk_event_vmd.so.6.0 00:05:02.573 SO libspdk_event_iobuf.so.3.0 00:05:02.573 SYMLINK libspdk_event_keyring.so 00:05:02.573 SYMLINK libspdk_event_sock.so 00:05:02.573 SYMLINK libspdk_event_vfu_tgt.so 00:05:02.573 SYMLINK libspdk_event_vhost_blk.so 00:05:02.573 SYMLINK libspdk_event_scheduler.so 00:05:02.573 SYMLINK libspdk_event_vmd.so 00:05:02.573 SYMLINK libspdk_event_iobuf.so 00:05:02.833 CC module/event/subsystems/accel/accel.o 00:05:03.094 LIB libspdk_event_accel.a 00:05:03.094 SO libspdk_event_accel.so.6.0 00:05:03.094 SYMLINK libspdk_event_accel.so 00:05:03.354 CC module/event/subsystems/bdev/bdev.o 00:05:03.614 LIB libspdk_event_bdev.a 00:05:03.614 SO libspdk_event_bdev.so.6.0 00:05:03.614 SYMLINK libspdk_event_bdev.so 00:05:03.871 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:03.872 CC module/event/subsystems/ublk/ublk.o 00:05:03.872 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:03.872 CC module/event/subsystems/scsi/scsi.o 00:05:03.872 CC module/event/subsystems/nbd/nbd.o 00:05:03.872 LIB libspdk_event_nbd.a 00:05:03.872 LIB libspdk_event_ublk.a 00:05:04.130 LIB libspdk_event_scsi.a 00:05:04.130 SO libspdk_event_nbd.so.6.0 00:05:04.130 SO libspdk_event_ublk.so.3.0 00:05:04.130 SO libspdk_event_scsi.so.6.0 00:05:04.130 SYMLINK libspdk_event_ublk.so 00:05:04.130 SYMLINK libspdk_event_nbd.so 00:05:04.130 SYMLINK 
libspdk_event_scsi.so 00:05:04.130 LIB libspdk_event_nvmf.a 00:05:04.130 SO libspdk_event_nvmf.so.6.0 00:05:04.130 SYMLINK libspdk_event_nvmf.so 00:05:04.130 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:04.388 CC module/event/subsystems/iscsi/iscsi.o 00:05:04.388 LIB libspdk_event_vhost_scsi.a 00:05:04.388 SO libspdk_event_vhost_scsi.so.3.0 00:05:04.388 LIB libspdk_event_iscsi.a 00:05:04.388 SO libspdk_event_iscsi.so.6.0 00:05:04.654 SYMLINK libspdk_event_vhost_scsi.so 00:05:04.654 SYMLINK libspdk_event_iscsi.so 00:05:04.654 SO libspdk.so.6.0 00:05:04.654 SYMLINK libspdk.so 00:05:04.915 CXX app/trace/trace.o 00:05:04.915 CC app/trace_record/trace_record.o 00:05:04.915 CC app/spdk_lspci/spdk_lspci.o 00:05:04.915 CC app/spdk_nvme_perf/perf.o 00:05:04.915 CC app/spdk_nvme_identify/identify.o 00:05:04.915 TEST_HEADER include/spdk/accel.h 00:05:04.915 TEST_HEADER include/spdk/accel_module.h 00:05:04.915 TEST_HEADER include/spdk/assert.h 00:05:04.915 TEST_HEADER include/spdk/barrier.h 00:05:04.915 CC app/spdk_top/spdk_top.o 00:05:04.915 TEST_HEADER include/spdk/base64.h 00:05:04.915 TEST_HEADER include/spdk/bdev.h 00:05:04.915 CC app/spdk_nvme_discover/discovery_aer.o 00:05:04.915 TEST_HEADER include/spdk/bdev_module.h 00:05:04.915 TEST_HEADER include/spdk/bdev_zone.h 00:05:04.915 TEST_HEADER include/spdk/bit_array.h 00:05:04.915 TEST_HEADER include/spdk/bit_pool.h 00:05:04.915 TEST_HEADER include/spdk/blob_bdev.h 00:05:04.915 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:04.915 TEST_HEADER include/spdk/blobfs.h 00:05:04.915 TEST_HEADER include/spdk/blob.h 00:05:04.915 TEST_HEADER include/spdk/conf.h 00:05:04.915 TEST_HEADER include/spdk/config.h 00:05:04.915 TEST_HEADER include/spdk/cpuset.h 00:05:04.915 TEST_HEADER include/spdk/crc16.h 00:05:04.915 TEST_HEADER include/spdk/crc32.h 00:05:04.915 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:04.915 TEST_HEADER include/spdk/crc64.h 00:05:04.915 TEST_HEADER include/spdk/dif.h 00:05:04.915 TEST_HEADER 
include/spdk/dma.h 00:05:04.915 TEST_HEADER include/spdk/endian.h 00:05:04.915 TEST_HEADER include/spdk/env_dpdk.h 00:05:04.915 TEST_HEADER include/spdk/env.h 00:05:04.915 TEST_HEADER include/spdk/event.h 00:05:04.915 TEST_HEADER include/spdk/fd_group.h 00:05:04.915 TEST_HEADER include/spdk/fd.h 00:05:04.915 CC app/iscsi_tgt/iscsi_tgt.o 00:05:04.915 TEST_HEADER include/spdk/file.h 00:05:04.915 TEST_HEADER include/spdk/ftl.h 00:05:04.915 CC app/vhost/vhost.o 00:05:04.915 TEST_HEADER include/spdk/gpt_spec.h 00:05:04.915 CC app/nvmf_tgt/nvmf_main.o 00:05:04.915 TEST_HEADER include/spdk/hexlify.h 00:05:04.915 TEST_HEADER include/spdk/histogram_data.h 00:05:04.915 TEST_HEADER include/spdk/idxd.h 00:05:04.915 TEST_HEADER include/spdk/idxd_spec.h 00:05:04.915 TEST_HEADER include/spdk/init.h 00:05:04.915 TEST_HEADER include/spdk/ioat.h 00:05:05.180 TEST_HEADER include/spdk/ioat_spec.h 00:05:05.180 CC examples/vmd/lsvmd/lsvmd.o 00:05:05.181 TEST_HEADER include/spdk/iscsi_spec.h 00:05:05.181 CC examples/idxd/perf/perf.o 00:05:05.181 TEST_HEADER include/spdk/json.h 00:05:05.181 TEST_HEADER include/spdk/jsonrpc.h 00:05:05.181 TEST_HEADER include/spdk/keyring.h 00:05:05.181 CC examples/ioat/perf/perf.o 00:05:05.181 CC app/spdk_tgt/spdk_tgt.o 00:05:05.181 CC examples/nvme/hello_world/hello_world.o 00:05:05.181 CC examples/accel/perf/accel_perf.o 00:05:05.181 TEST_HEADER include/spdk/keyring_module.h 00:05:05.181 TEST_HEADER include/spdk/likely.h 00:05:05.181 TEST_HEADER include/spdk/log.h 00:05:05.181 TEST_HEADER include/spdk/lvol.h 00:05:05.181 TEST_HEADER include/spdk/memory.h 00:05:05.181 CC examples/sock/hello_world/hello_sock.o 00:05:05.181 TEST_HEADER include/spdk/mmio.h 00:05:05.181 CC test/event/event_perf/event_perf.o 00:05:05.181 TEST_HEADER include/spdk/nbd.h 00:05:05.181 CC examples/util/zipf/zipf.o 00:05:05.181 TEST_HEADER include/spdk/notify.h 00:05:05.181 TEST_HEADER include/spdk/nvme.h 00:05:05.181 TEST_HEADER include/spdk/nvme_intel.h 00:05:05.181 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:05:05.181 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:05.181 TEST_HEADER include/spdk/nvme_spec.h 00:05:05.181 TEST_HEADER include/spdk/nvme_zns.h 00:05:05.181 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:05.181 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:05.181 CC examples/blob/hello_world/hello_blob.o 00:05:05.181 TEST_HEADER include/spdk/nvmf.h 00:05:05.181 TEST_HEADER include/spdk/nvmf_spec.h 00:05:05.181 TEST_HEADER include/spdk/nvmf_transport.h 00:05:05.181 CC test/accel/dif/dif.o 00:05:05.181 CC test/dma/test_dma/test_dma.o 00:05:05.181 TEST_HEADER include/spdk/opal.h 00:05:05.181 CC examples/thread/thread/thread_ex.o 00:05:05.181 CC test/bdev/bdevio/bdevio.o 00:05:05.181 CC test/app/bdev_svc/bdev_svc.o 00:05:05.181 CC test/blobfs/mkfs/mkfs.o 00:05:05.181 CC examples/nvmf/nvmf/nvmf.o 00:05:05.181 TEST_HEADER include/spdk/opal_spec.h 00:05:05.181 TEST_HEADER include/spdk/pci_ids.h 00:05:05.181 CC examples/bdev/hello_world/hello_bdev.o 00:05:05.181 TEST_HEADER include/spdk/pipe.h 00:05:05.181 TEST_HEADER include/spdk/queue.h 00:05:05.181 TEST_HEADER include/spdk/reduce.h 00:05:05.181 TEST_HEADER include/spdk/rpc.h 00:05:05.181 TEST_HEADER include/spdk/scheduler.h 00:05:05.181 TEST_HEADER include/spdk/scsi.h 00:05:05.181 TEST_HEADER include/spdk/scsi_spec.h 00:05:05.181 TEST_HEADER include/spdk/sock.h 00:05:05.181 TEST_HEADER include/spdk/stdinc.h 00:05:05.181 TEST_HEADER include/spdk/string.h 00:05:05.181 TEST_HEADER include/spdk/thread.h 00:05:05.181 TEST_HEADER include/spdk/trace.h 00:05:05.181 TEST_HEADER include/spdk/trace_parser.h 00:05:05.181 TEST_HEADER include/spdk/tree.h 00:05:05.181 TEST_HEADER include/spdk/ublk.h 00:05:05.181 LINK spdk_lspci 00:05:05.181 TEST_HEADER include/spdk/util.h 00:05:05.181 CC test/env/mem_callbacks/mem_callbacks.o 00:05:05.181 TEST_HEADER include/spdk/uuid.h 00:05:05.181 TEST_HEADER include/spdk/version.h 00:05:05.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:05.181 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:05:05.181 TEST_HEADER include/spdk/vhost.h 00:05:05.181 TEST_HEADER include/spdk/vmd.h 00:05:05.181 TEST_HEADER include/spdk/xor.h 00:05:05.181 TEST_HEADER include/spdk/zipf.h 00:05:05.181 CXX test/cpp_headers/accel.o 00:05:05.181 CC test/lvol/esnap/esnap.o 00:05:05.438 LINK lsvmd 00:05:05.438 LINK spdk_nvme_discover 00:05:05.438 LINK interrupt_tgt 00:05:05.438 LINK event_perf 00:05:05.438 LINK zipf 00:05:05.438 LINK spdk_trace_record 00:05:05.438 LINK nvmf_tgt 00:05:05.438 LINK vhost 00:05:05.438 LINK iscsi_tgt 00:05:05.438 LINK ioat_perf 00:05:05.438 LINK spdk_tgt 00:05:05.438 LINK bdev_svc 00:05:05.438 LINK hello_world 00:05:05.438 LINK hello_sock 00:05:05.438 LINK mkfs 00:05:05.700 LINK mem_callbacks 00:05:05.700 CXX test/cpp_headers/accel_module.o 00:05:05.700 LINK hello_blob 00:05:05.700 LINK thread 00:05:05.700 LINK hello_bdev 00:05:05.700 CC test/event/reactor/reactor.o 00:05:05.700 LINK idxd_perf 00:05:05.700 LINK spdk_trace 00:05:05.700 LINK nvmf 00:05:05.700 CXX test/cpp_headers/assert.o 00:05:05.700 CC examples/vmd/led/led.o 00:05:05.700 CC examples/ioat/verify/verify.o 00:05:05.700 CC test/rpc_client/rpc_client_test.o 00:05:05.700 CC examples/nvme/reconnect/reconnect.o 00:05:05.700 LINK test_dma 00:05:05.967 CC test/env/vtophys/vtophys.o 00:05:05.967 LINK bdevio 00:05:05.967 LINK accel_perf 00:05:05.967 LINK dif 00:05:05.967 LINK reactor 00:05:05.967 CC test/nvme/aer/aer.o 00:05:05.967 CC test/app/histogram_perf/histogram_perf.o 00:05:05.967 CC test/nvme/reset/reset.o 00:05:05.967 CC test/app/jsoncat/jsoncat.o 00:05:05.967 CC examples/bdev/bdevperf/bdevperf.o 00:05:05.967 CXX test/cpp_headers/barrier.o 00:05:05.967 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:05.967 CC test/event/reactor_perf/reactor_perf.o 00:05:05.967 CC examples/blob/cli/blobcli.o 00:05:05.967 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:05.967 LINK led 00:05:05.967 CC examples/nvme/arbitration/arbitration.o 00:05:06.229 CC 
examples/nvme/hotplug/hotplug.o 00:05:06.229 CC test/nvme/sgl/sgl.o 00:05:06.229 CC test/app/stub/stub.o 00:05:06.229 LINK vtophys 00:05:06.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:06.229 LINK rpc_client_test 00:05:06.229 CC test/thread/poller_perf/poller_perf.o 00:05:06.229 LINK verify 00:05:06.229 LINK jsoncat 00:05:06.229 CC examples/nvme/abort/abort.o 00:05:06.229 LINK histogram_perf 00:05:06.229 CC test/nvme/e2edp/nvme_dp.o 00:05:06.229 LINK reactor_perf 00:05:06.229 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:06.229 CXX test/cpp_headers/base64.o 00:05:06.489 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:06.489 LINK spdk_nvme_perf 00:05:06.489 CC test/event/app_repeat/app_repeat.o 00:05:06.489 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:06.489 LINK spdk_nvme_identify 00:05:06.489 LINK stub 00:05:06.489 LINK reconnect 00:05:06.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:06.489 LINK aer 00:05:06.489 CXX test/cpp_headers/bdev.o 00:05:06.489 LINK poller_perf 00:05:06.489 LINK reset 00:05:06.489 LINK cmb_copy 00:05:06.489 LINK spdk_top 00:05:06.489 LINK hotplug 00:05:06.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:06.489 CC test/nvme/overhead/overhead.o 00:05:06.489 CC app/spdk_dd/spdk_dd.o 00:05:06.755 LINK sgl 00:05:06.755 CC test/env/memory/memory_ut.o 00:05:06.755 CC test/event/scheduler/scheduler.o 00:05:06.755 LINK env_dpdk_post_init 00:05:06.755 LINK arbitration 00:05:06.755 LINK app_repeat 00:05:06.755 CXX test/cpp_headers/bdev_module.o 00:05:06.755 LINK nvme_fuzz 00:05:06.755 CXX test/cpp_headers/bdev_zone.o 00:05:06.755 CXX test/cpp_headers/bit_array.o 00:05:06.755 LINK pmr_persistence 00:05:06.755 CC test/env/pci/pci_ut.o 00:05:06.755 CXX test/cpp_headers/bit_pool.o 00:05:06.755 CC app/fio/nvme/fio_plugin.o 00:05:06.755 LINK nvme_dp 00:05:06.755 CC test/nvme/err_injection/err_injection.o 00:05:06.755 CXX test/cpp_headers/blob_bdev.o 00:05:07.021 CC test/nvme/startup/startup.o 00:05:07.021 CC 
test/nvme/reserve/reserve.o 00:05:07.021 CXX test/cpp_headers/blobfs_bdev.o 00:05:07.021 CC test/nvme/simple_copy/simple_copy.o 00:05:07.021 CC test/nvme/connect_stress/connect_stress.o 00:05:07.021 LINK abort 00:05:07.021 LINK nvme_manage 00:05:07.021 LINK blobcli 00:05:07.021 CXX test/cpp_headers/blobfs.o 00:05:07.021 CC test/nvme/boot_partition/boot_partition.o 00:05:07.021 CXX test/cpp_headers/blob.o 00:05:07.021 CC app/fio/bdev/fio_plugin.o 00:05:07.021 LINK overhead 00:05:07.021 CXX test/cpp_headers/conf.o 00:05:07.021 LINK scheduler 00:05:07.022 CC test/nvme/compliance/nvme_compliance.o 00:05:07.289 CC test/nvme/fused_ordering/fused_ordering.o 00:05:07.289 CXX test/cpp_headers/config.o 00:05:07.289 CXX test/cpp_headers/cpuset.o 00:05:07.289 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:07.289 CXX test/cpp_headers/crc16.o 00:05:07.289 CXX test/cpp_headers/crc32.o 00:05:07.289 CC test/nvme/fdp/fdp.o 00:05:07.289 CXX test/cpp_headers/crc64.o 00:05:07.289 LINK startup 00:05:07.289 LINK err_injection 00:05:07.289 CXX test/cpp_headers/dif.o 00:05:07.289 LINK spdk_dd 00:05:07.289 CXX test/cpp_headers/dma.o 00:05:07.289 LINK connect_stress 00:05:07.289 CXX test/cpp_headers/endian.o 00:05:07.289 CC test/nvme/cuse/cuse.o 00:05:07.289 LINK reserve 00:05:07.289 CXX test/cpp_headers/env_dpdk.o 00:05:07.289 LINK vhost_fuzz 00:05:07.289 LINK boot_partition 00:05:07.289 CXX test/cpp_headers/env.o 00:05:07.289 CXX test/cpp_headers/event.o 00:05:07.289 LINK simple_copy 00:05:07.289 LINK bdevperf 00:05:07.551 CXX test/cpp_headers/fd_group.o 00:05:07.551 CXX test/cpp_headers/fd.o 00:05:07.551 CXX test/cpp_headers/file.o 00:05:07.551 LINK pci_ut 00:05:07.551 CXX test/cpp_headers/ftl.o 00:05:07.552 CXX test/cpp_headers/gpt_spec.o 00:05:07.552 CXX test/cpp_headers/hexlify.o 00:05:07.552 CXX test/cpp_headers/histogram_data.o 00:05:07.552 CXX test/cpp_headers/idxd.o 00:05:07.552 CXX test/cpp_headers/idxd_spec.o 00:05:07.552 LINK fused_ordering 00:05:07.552 CXX 
test/cpp_headers/init.o 00:05:07.552 LINK doorbell_aers 00:05:07.552 CXX test/cpp_headers/ioat.o 00:05:07.552 CXX test/cpp_headers/ioat_spec.o 00:05:07.552 CXX test/cpp_headers/iscsi_spec.o 00:05:07.552 CXX test/cpp_headers/json.o 00:05:07.552 CXX test/cpp_headers/jsonrpc.o 00:05:07.552 CXX test/cpp_headers/keyring.o 00:05:07.552 CXX test/cpp_headers/keyring_module.o 00:05:07.810 CXX test/cpp_headers/likely.o 00:05:07.810 CXX test/cpp_headers/log.o 00:05:07.810 CXX test/cpp_headers/lvol.o 00:05:07.810 LINK nvme_compliance 00:05:07.810 CXX test/cpp_headers/memory.o 00:05:07.810 CXX test/cpp_headers/mmio.o 00:05:07.810 CXX test/cpp_headers/nbd.o 00:05:07.810 LINK spdk_nvme 00:05:07.810 CXX test/cpp_headers/notify.o 00:05:07.810 LINK fdp 00:05:07.810 CXX test/cpp_headers/nvme.o 00:05:07.810 CXX test/cpp_headers/nvme_intel.o 00:05:07.810 CXX test/cpp_headers/nvme_ocssd.o 00:05:07.810 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:07.810 CXX test/cpp_headers/nvme_spec.o 00:05:07.810 CXX test/cpp_headers/nvme_zns.o 00:05:07.810 CXX test/cpp_headers/nvmf_cmd.o 00:05:08.074 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:08.074 CXX test/cpp_headers/nvmf.o 00:05:08.074 LINK memory_ut 00:05:08.074 CXX test/cpp_headers/nvmf_spec.o 00:05:08.074 CXX test/cpp_headers/nvmf_transport.o 00:05:08.074 CXX test/cpp_headers/opal.o 00:05:08.074 CXX test/cpp_headers/opal_spec.o 00:05:08.074 CXX test/cpp_headers/pci_ids.o 00:05:08.074 LINK spdk_bdev 00:05:08.074 CXX test/cpp_headers/pipe.o 00:05:08.074 CXX test/cpp_headers/queue.o 00:05:08.074 CXX test/cpp_headers/reduce.o 00:05:08.074 CXX test/cpp_headers/rpc.o 00:05:08.074 CXX test/cpp_headers/scheduler.o 00:05:08.074 CXX test/cpp_headers/scsi.o 00:05:08.074 CXX test/cpp_headers/sock.o 00:05:08.074 CXX test/cpp_headers/scsi_spec.o 00:05:08.074 CXX test/cpp_headers/stdinc.o 00:05:08.074 CXX test/cpp_headers/string.o 00:05:08.074 CXX test/cpp_headers/thread.o 00:05:08.074 CXX test/cpp_headers/trace.o 00:05:08.074 CXX 
test/cpp_headers/trace_parser.o 00:05:08.074 CXX test/cpp_headers/tree.o 00:05:08.334 CXX test/cpp_headers/ublk.o 00:05:08.334 CXX test/cpp_headers/util.o 00:05:08.334 CXX test/cpp_headers/uuid.o 00:05:08.334 CXX test/cpp_headers/version.o 00:05:08.334 CXX test/cpp_headers/vfio_user_pci.o 00:05:08.334 CXX test/cpp_headers/vfio_user_spec.o 00:05:08.334 CXX test/cpp_headers/vhost.o 00:05:08.334 CXX test/cpp_headers/vmd.o 00:05:08.334 CXX test/cpp_headers/xor.o 00:05:08.334 CXX test/cpp_headers/zipf.o 00:05:08.900 LINK iscsi_fuzz 00:05:09.181 LINK cuse 00:05:11.719 LINK esnap 00:05:11.978 00:05:11.978 real 0m44.449s 00:05:11.978 user 7m49.689s 00:05:11.978 sys 1m39.940s 00:05:11.978 00:17:39 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:05:11.978 00:17:39 make -- common/autotest_common.sh@10 -- $ set +x 00:05:11.978 ************************************ 00:05:11.978 END TEST make 00:05:11.978 ************************************ 00:05:11.978 00:17:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:11.978 00:17:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:11.978 00:17:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:11.978 00:17:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:11.978 00:17:39 -- pm/common@44 -- $ pid=754957 00:05:11.978 00:17:39 -- pm/common@50 -- $ kill -TERM 754957 00:05:11.978 00:17:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:11.978 00:17:39 -- pm/common@44 -- $ pid=754959 00:05:11.978 00:17:39 -- pm/common@50 -- $ kill -TERM 754959 00:05:11.978 00:17:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:11.978 00:17:39 -- pm/common@44 -- $ pid=754961 00:05:11.978 00:17:39 -- pm/common@50 -- $ kill -TERM 754961 00:05:11.978 00:17:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:11.978 00:17:39 -- pm/common@44 -- $ pid=754990 00:05:11.978 00:17:39 -- pm/common@50 -- $ sudo -E kill -TERM 754990 00:05:11.978 00:17:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.978 00:17:39 -- nvmf/common.sh@7 -- # uname -s 00:05:11.978 00:17:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.978 00:17:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.978 00:17:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.978 00:17:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.978 00:17:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.978 00:17:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.978 00:17:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.978 00:17:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.978 00:17:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.978 00:17:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.978 00:17:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:11.978 00:17:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:11.978 00:17:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.978 00:17:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.978 00:17:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:11.978 00:17:39 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.978 00:17:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.978 00:17:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.978 00:17:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.978 00:17:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.978 00:17:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.978 00:17:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.978 00:17:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.978 00:17:39 -- paths/export.sh@5 -- # export PATH 00:05:11.978 00:17:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.978 00:17:39 -- nvmf/common.sh@47 -- # : 0 00:05:11.978 00:17:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.978 00:17:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.978 00:17:39 -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:05:11.978 00:17:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.978 00:17:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.978 00:17:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.978 00:17:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.978 00:17:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.978 00:17:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:11.978 00:17:39 -- spdk/autotest.sh@32 -- # uname -s 00:05:11.978 00:17:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:11.978 00:17:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:11.978 00:17:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:11.978 00:17:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:11.978 00:17:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:11.978 00:17:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:11.978 00:17:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:11.978 00:17:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:11.978 00:17:39 -- spdk/autotest.sh@48 -- # udevadm_pid=827677 00:05:11.978 00:17:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:11.978 00:17:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:11.978 00:17:39 -- pm/common@17 -- # local monitor 00:05:11.978 00:17:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- pm/common@21 -- # date +%s 00:05:11.978 00:17:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.978 00:17:39 -- 
pm/common@21 -- # date +%s 00:05:11.978 00:17:39 -- pm/common@25 -- # sleep 1 00:05:11.978 00:17:39 -- pm/common@21 -- # date +%s 00:05:11.978 00:17:39 -- pm/common@21 -- # date +%s 00:05:11.978 00:17:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720736259 00:05:11.978 00:17:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720736259 00:05:11.978 00:17:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720736259 00:05:11.978 00:17:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720736259 00:05:12.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720736259_collect-vmstat.pm.log 00:05:12.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720736259_collect-cpu-load.pm.log 00:05:12.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720736259_collect-cpu-temp.pm.log 00:05:12.238 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720736259_collect-bmc-pm.bmc.pm.log 00:05:13.178 00:17:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:13.178 00:17:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:13.178 00:17:40 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:13.178 00:17:40 -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.178 00:17:40 -- spdk/autotest.sh@59 -- # create_test_list 00:05:13.178 00:17:40 -- common/autotest_common.sh@744 -- # xtrace_disable 00:05:13.178 00:17:40 -- common/autotest_common.sh@10 -- # set +x 00:05:13.178 00:17:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:13.178 00:17:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.178 00:17:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.178 00:17:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:13.178 00:17:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.178 00:17:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:13.178 00:17:40 -- common/autotest_common.sh@1451 -- # uname 00:05:13.178 00:17:40 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:05:13.178 00:17:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:13.178 00:17:40 -- common/autotest_common.sh@1471 -- # uname 00:05:13.178 00:17:40 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:05:13.178 00:17:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:13.178 00:17:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:13.178 00:17:40 -- spdk/autotest.sh@72 -- # hash lcov 00:05:13.178 00:17:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:13.178 00:17:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:13.178 --rc lcov_branch_coverage=1 00:05:13.178 --rc lcov_function_coverage=1 00:05:13.178 --rc genhtml_branch_coverage=1 00:05:13.178 --rc genhtml_function_coverage=1 00:05:13.178 --rc genhtml_legend=1 00:05:13.178 --rc geninfo_all_blocks=1 00:05:13.178 ' 00:05:13.178 00:17:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:13.178 --rc lcov_branch_coverage=1 00:05:13.178 --rc 
lcov_function_coverage=1 00:05:13.178 --rc genhtml_branch_coverage=1 00:05:13.178 --rc genhtml_function_coverage=1 00:05:13.178 --rc genhtml_legend=1 00:05:13.178 --rc geninfo_all_blocks=1 00:05:13.178 ' 00:05:13.178 00:17:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:13.178 --rc lcov_branch_coverage=1 00:05:13.178 --rc lcov_function_coverage=1 00:05:13.178 --rc genhtml_branch_coverage=1 00:05:13.178 --rc genhtml_function_coverage=1 00:05:13.178 --rc genhtml_legend=1 00:05:13.178 --rc geninfo_all_blocks=1 00:05:13.178 --no-external' 00:05:13.178 00:17:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:13.179 --rc lcov_branch_coverage=1 00:05:13.179 --rc lcov_function_coverage=1 00:05:13.179 --rc genhtml_branch_coverage=1 00:05:13.179 --rc genhtml_function_coverage=1 00:05:13.179 --rc genhtml_legend=1 00:05:13.179 --rc geninfo_all_blocks=1 00:05:13.179 --no-external' 00:05:13.179 00:17:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:13.179 lcov: LCOV version 1.14 00:05:13.179 00:17:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:31.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:31.286 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:43.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:43.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:05:43.501 geninfo: WARNING: GCOV did not produce any data for the remaining /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno stubs (accel_module.gcno through zipf.gcno, each reported as "no functions found") 00:05:48.792 00:18:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:48.792 00:18:15 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:48.792 00:18:15 -- common/autotest_common.sh@10 -- # set +x 00:05:48.792 00:18:15 -- spdk/autotest.sh@91 -- # rm -f 00:05:48.792 00:18:15 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:49.052 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:05:49.052 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:05:49.052 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:05:49.052 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:05:49.052 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:05:49.052 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:05:49.052 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:05:49.052 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:05:49.052 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:05:49.052 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:05:49.052 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:05:49.052 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 
00:05:49.052 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:05:49.052 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:05:49.052 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:05:49.052 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:05:49.052 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:05:49.310 00:18:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:49.310 00:18:16 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:49.310 00:18:16 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:49.310 00:18:16 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:49.310 00:18:16 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:49.311 00:18:16 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:49.311 00:18:16 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:49.311 00:18:16 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.311 00:18:16 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:49.311 00:18:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:49.311 00:18:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.311 00:18:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:49.311 00:18:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:49.311 00:18:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:49.311 00:18:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:49.311 No valid GPT data, bailing 00:05:49.311 00:18:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:49.311 00:18:16 -- scripts/common.sh@391 -- # pt= 00:05:49.311 00:18:16 -- scripts/common.sh@392 -- # return 1 00:05:49.311 00:18:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:49.311 1+0 records in 00:05:49.311 1+0 records out 
00:05:49.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235446 s, 445 MB/s 00:05:49.311 00:18:16 -- spdk/autotest.sh@118 -- # sync 00:05:49.311 00:18:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:49.311 00:18:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:49.311 00:18:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:50.693 00:18:18 -- spdk/autotest.sh@124 -- # uname -s 00:05:50.693 00:18:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:50.693 00:18:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:50.693 00:18:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.693 00:18:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.693 00:18:18 -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 ************************************ 00:05:50.952 START TEST setup.sh 00:05:50.952 ************************************ 00:05:50.952 00:18:18 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:05:50.952 * Looking for test storage... 
00:05:50.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:50.952 00:18:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:50.952 00:18:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:50.952 00:18:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:50.952 00:18:18 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:50.952 00:18:18 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.952 00:18:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:50.952 ************************************ 00:05:50.952 START TEST acl 00:05:50.952 ************************************ 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:05:50.952 * Looking for test storage... 00:05:50.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:50.952 00:18:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:50.952 00:18:18 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:50.952 00:18:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:50.952 00:18:18 
setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:50.952 00:18:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:50.952 00:18:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:50.952 00:18:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:50.952 00:18:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:50.952 00:18:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:52.348 00:18:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:52.348 00:18:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:52.348 00:18:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:52.348 00:18:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:52.348 00:18:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.348 00:18:19 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:53.299 Hugepages 00:05:53.299 node hugesize free / total 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:05:53.299 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:53.299 
00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:53.299 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- 
# continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:53.300 00:18:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:53.300 00:18:20 setup.sh.acl -- 
setup/acl.sh@54 -- # run_test denied denied 00:05:53.300 00:18:20 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:53.300 00:18:20 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.300 00:18:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:53.300 ************************************ 00:05:53.300 START TEST denied 00:05:53.300 ************************************ 00:05:53.300 00:18:20 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:53.300 00:18:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:05:53.300 00:18:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:53.300 00:18:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.300 00:18:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:53.300 00:18:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:05:54.681 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:05:54.681 00:18:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:05:54.681 00:18:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:54.681 00:18:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:54.682 00:18:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:54.682 00:18:22 
setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:57.221 00:05:57.221 real 0m3.464s 00:05:57.221 user 0m0.998s 00:05:57.221 sys 0m1.624s 00:05:57.221 00:18:24 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.221 00:18:24 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:57.221 ************************************ 00:05:57.221 END TEST denied 00:05:57.221 ************************************ 00:05:57.221 00:18:24 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:57.221 00:18:24 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:57.221 00:18:24 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.221 00:18:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:57.221 ************************************ 00:05:57.221 START TEST allowed 00:05:57.221 ************************************ 00:05:57.221 00:18:24 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:57.221 00:18:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:05:57.221 00:18:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:57.221 00:18:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:57.221 00:18:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:05:57.221 00:18:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:59.123 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:05:59.123 00:18:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:59.123 00:18:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:59.123 00:18:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:59.123 00:18:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 
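The denied and allowed tests above drive the same `setup.sh config` step with `PCI_BLOCKED=' 0000:84:00.0'` and `PCI_ALLOWED=0000:84:00.0` respectively, then grep the output for the expected skip or bind message. The actual filtering lives inside `scripts/setup.sh`; the sketch below is a hypothetical stand-in (the function name `pci_can_use` and the exact matching rules are assumptions, not the real implementation) showing the allow/block decision the log exercises:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the allow/block decision seen in the log:
# a BDF is skipped when it appears in PCI_BLOCKED, or when a
# non-empty PCI_ALLOWED list does not mention it.
pci_can_use() {
    local bdf=$1 dev
    for dev in $PCI_BLOCKED; do
        [[ $dev == "$bdf" ]] && return 1   # explicitly blocked
    done
    [[ -z $PCI_ALLOWED ]] && return 0      # no allowlist: everything passes
    for dev in $PCI_ALLOWED; do
        [[ $dev == "$bdf" ]] && return 0   # explicitly allowed
    done
    return 1                               # allowlist set, BDF not on it
}

PCI_BLOCKED=' 0000:84:00.0'
PCI_ALLOWED=''
pci_can_use 0000:84:00.0 || echo "Skipping denied controller at 0000:84:00.0"
```

With the roles reversed (`PCI_BLOCKED` empty, `PCI_ALLOWED=0000:84:00.0`), the same function passes the controller through, which is what the allowed test verifies when the device rebinds `nvme -> vfio-pci`.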
00:05:59.123 00:18:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:00.504 00:06:00.504 real 0m3.549s 00:06:00.504 user 0m0.936s 00:06:00.504 sys 0m1.550s 00:06:00.504 00:18:28 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.504 00:18:28 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 END TEST allowed 00:06:00.504 ************************************ 00:06:00.504 00:06:00.504 real 0m9.434s 00:06:00.504 user 0m2.919s 00:06:00.504 sys 0m4.724s 00:06:00.504 00:18:28 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.504 00:18:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 END TEST acl 00:06:00.504 ************************************ 00:06:00.504 00:18:28 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:00.504 00:18:28 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.504 00:18:28 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.504 00:18:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 ************************************ 00:06:00.504 START TEST hugepages 00:06:00.504 ************************************ 00:06:00.504 00:18:28 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:06:00.504 * Looking for test storage... 
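The hugepages suite that starts here begins by calling `get_meminfo Hugepagesize`, whose xtrace follows: it reads `/proc/meminfo` field by field until it hits the requested key, then echoes the value. A condensed stand-in for that lookup (the file argument is added here so it can be exercised against canned data; the real helper also supports per-node meminfo files):

```shell
#!/usr/bin/env bash
# Minimal stand-in for get_meminfo: print the value (in kB) of one
# /proc/meminfo field. Reads from a file argument for testability;
# the suite itself reads /proc/meminfo.
get_meminfo() {
    local key=$1 file=${2:-/proc/meminfo}
    awk -v k="$key" -F': *' '$1 == k {print $2+0; exit}' "$file"
}

printf 'MemTotal:       52291176 kB\nHugepagesize:       2048 kB\n' > /tmp/meminfo.sample
get_meminfo Hugepagesize /tmp/meminfo.sample   # prints 2048
```

The `2048` result is exactly what the trace below ends with (`echo 2048` / `return 0`), and it becomes `default_hugepages` for the rest of the suite.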
00:06:00.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 30878732 kB' 'MemAvailable: 34338444 kB' 'Buffers: 5520 kB' 'Cached: 15282804 kB' 'SwapCached: 0 kB' 'Active: 12300188 kB' 'Inactive: 3440976 kB' 'Active(anon): 11900164 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456096 kB' 'Mapped: 161724 kB' 'Shmem: 11447324 kB' 'KReclaimable: 175192 kB' 'Slab: 418476 kB' 'SReclaimable: 175192 kB' 'SUnreclaim: 243284 kB' 'KernelStack: 10064 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12868144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186748 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.504 00:18:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:00.504 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 
00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 
00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.505 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:00.506 
00:18:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:00.506 00:18:28 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:06:00.506 00:18:28 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:00.506 00:18:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.506 00:18:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:00.506 ************************************ 00:06:00.506 START TEST default_setup 00:06:00.506 ************************************ 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:00.506 00:18:28 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:06:00.506 00:18:28 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:01.442 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:01.442 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:01.442 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:01.703 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:01.703 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:01.703 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:01.703 0000:00:04.1 (8086 3c21): ioatdma -> 
vfio-pci 00:06:01.703 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:01.703 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:02.649 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32974696 kB' 'MemAvailable: 36434432 kB' 'Buffers: 5520 kB' 'Cached: 15282892 kB' 'SwapCached: 0 kB' 'Active: 12318828 kB' 'Inactive: 3440976 kB' 'Active(anon): 11918804 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474560 kB' 'Mapped: 161816 kB' 'Shmem: 11447412 kB' 'KReclaimable: 175240 kB' 'Slab: 418288 kB' 'SReclaimable: 175240 kB' 'SUnreclaim: 243048 kB' 'KernelStack: 9952 kB' 'PageTables: 7008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12887116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186844 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 
19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 
00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.649 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 
00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.650 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32982424 kB' 'MemAvailable: 36442156 kB' 'Buffers: 5520 kB' 'Cached: 15282896 kB' 'SwapCached: 0 kB' 'Active: 12318072 kB' 'Inactive: 3440976 kB' 'Active(anon): 11918048 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473812 kB' 'Mapped: 161840 kB' 'Shmem: 11447416 kB' 'KReclaimable: 175232 kB' 'Slab: 418396 kB' 'SReclaimable: 175232 kB' 'SUnreclaim: 243164 kB' 'KernelStack: 9824 kB' 'PageTables: 6864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12887344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186892 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.650 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.650 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 
00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.651 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32982900 kB' 'MemAvailable: 36442632 kB' 'Buffers: 5520 kB' 'Cached: 15282916 kB' 'SwapCached: 0 kB' 'Active: 12316876 kB' 'Inactive: 3440976 kB' 'Active(anon): 11916852 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472508 kB' 'Mapped: 161760 kB' 'Shmem: 11447436 kB' 'KReclaimable: 175232 kB' 'Slab: 418320 kB' 'SReclaimable: 175232 kB' 'SUnreclaim: 243088 kB' 'KernelStack: 9872 kB' 'PageTables: 6308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186844 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.652 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 
00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:02.653 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: remaining /proc/meminfo keys (SReclaimable .. HugePages_Free) each compared against HugePages_Rsvd and skipped via continue]
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:02.654 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32982648 kB' 'MemAvailable: 36442380 kB' 'Buffers: 5520 kB' 'Cached: 15282936 kB' 'SwapCached: 0 kB' 'Active: 12317056 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917032 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472852 kB' 'Mapped: 161744 kB' 'Shmem: 11447456 kB' 'KReclaimable: 175232 kB' 'Slab: 418320 kB' 'SReclaimable: 175232 kB' 'SUnreclaim: 243088 kB' 'KernelStack: 9936 kB' 'PageTables: 7180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186780 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB'
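The long xtrace runs in this log come from the `get_meminfo` helper scanning /proc/meminfo (or a per-node meminfo file) key by key with `IFS=': ' read -r var val _`. A minimal, self-contained sketch of that pattern; the function name `get_meminfo_value` and the `echo 0` fallback are illustrative assumptions, not the script's real interface:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo scan seen in the trace: split each line on ': ',
# compare the key against the one requested, print its value on match.
get_meminfo_value() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node index, read the per-node sysfs meminfo instead
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix every line with "Node N "; strip that first
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    echo 0   # key not present in this meminfo file
}
```

Called as `get_meminfo_value HugePages_Total` (or `get_meminfo_value HugePages_Surp 0` for node 0), it prints the numeric value, matching the `echo 1024` / `return 0` steps visible in the trace.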
00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: meminfo keys each compared against HugePages_Total; non-matching keys skipped via continue]
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 19832876 kB' 'MemUsed: 13001816 kB' 'SwapCached: 0 kB' 'Active: 6854660 kB' 'Inactive: 3336096 kB' 'Active(anon): 6611848 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9873520 kB' 'Mapped: 47368 kB' 'AnonPages: 320412 kB' 'Shmem: 6294612 kB' 'KernelStack: 5864 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 230000 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.656 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.657 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:06:02.916 node0=1024 expecting 1024 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:02.916 00:06:02.916 real 0m2.246s 00:06:02.916 user 0m0.617s 00:06:02.916 sys 0m0.782s 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.916 00:18:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:06:02.916 ************************************ 00:06:02.916 END TEST default_setup 00:06:02.916 ************************************ 00:06:02.916 00:18:30 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:06:02.916 00:18:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.916 00:18:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.916 00:18:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:02.916 ************************************ 00:06:02.916 START TEST per_node_1G_alloc 00:06:02.916 ************************************ 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:02.916 00:18:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:02.916 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:06:02.917 00:18:30 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:02.917 00:18:30 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:03.854 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:03.854 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:03.854 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:03.854 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:03.854 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:03.854 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:03.854 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:03.854 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:03.854 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:03.854 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:03.854 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:03.854 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:03.854 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:03.854 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:03.854 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:03.854 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:03.854 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local 
sorted_s 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32990172 kB' 'MemAvailable: 36449888 kB' 'Buffers: 5520 kB' 'Cached: 15283000 kB' 'SwapCached: 0 kB' 'Active: 12316800 kB' 'Inactive: 
3440976 kB' 'Active(anon): 11916776 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472408 kB' 'Mapped: 161776 kB' 'Shmem: 11447520 kB' 'KReclaimable: 175200 kB' 'Slab: 418348 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243148 kB' 'KernelStack: 9888 kB' 'PageTables: 7032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186860 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.854 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.854 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.854 00:18:31
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue for each non-matching /proc/meminfo key from Unevictable through HardwareCorrupted] 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.855 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32989956 kB' 'MemAvailable: 36449672 kB' 'Buffers: 5520 kB' 'Cached: 15283000 kB' 'SwapCached: 0 kB' 'Active: 12317300 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917276 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472892 kB' 'Mapped: 161756 kB' 'Shmem: 11447520 kB' 'KReclaimable: 175200 kB' 'Slab: 418308 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243108 kB' 'KernelStack: 9920 kB' 'PageTables: 7096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186828 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:03.855 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:06:03.855 00:18:31
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for each non-matching /proc/meminfo key from MemAvailable through HugePages_Rsvd] 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:04.121 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32989452 kB' 'MemAvailable: 36449168 kB' 'Buffers: 5520 kB' 'Cached: 15283024 kB' 'SwapCached: 0 kB' 'Active: 12317068 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917044 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472672 kB' 'Mapped: 161756 kB' 'Shmem: 11447544 kB' 'KReclaimable: 175200 kB' 'Slab: 418364 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243164 kB' 'KernelStack: 9904 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186812 kB' 'VmallocChunk: 
0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.121 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 
00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.122 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:04.123 nr_hugepages=1024 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:04.123 resv_hugepages=0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:04.123 surplus_hugepages=0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:04.123 anon_hugepages=0 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
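At this point the trace has read `surp=0` and `resv=0` from the meminfo scan and runs the consistency checks at `setup/hugepages.sh@107` and `@109`. The arithmetic those two `(( ))` lines perform can be sketched as follows; the variable values are the ones visible in the trace, and the echoed messages are illustrative, not part of the original script.

```shell
# Sketch of the consistency checks traced at hugepages.sh@107 and @109:
# the allocated page count must equal requested + surplus + reserved,
# and with nothing in use, all pages must still be free.
nr_hugepages=1024; surp=0; resv=0
total=1024; free=1024   # HugePages_Total / HugePages_Free from meminfo

(( total == nr_hugepages + surp + resv )) && echo "accounting ok"
(( free == nr_hugepages )) && echo "all pages free"
```

With the values above, both checks pass, which is why the trace proceeds to re-read `HugePages_Total` rather than failing out.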
/sys/devices/system/node/node/meminfo ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32990124 kB' 'MemAvailable: 36449840 kB' 'Buffers: 5520 kB' 'Cached: 15283044 kB' 'SwapCached: 0 kB' 'Active: 12317108 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917084 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472672 kB' 'Mapped: 161756 kB' 'Shmem: 11447564 kB' 'KReclaimable: 175200 kB' 'Slab: 418364 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243164 kB' 'KernelStack: 9904 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186812 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 
00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.123 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.124 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:04.124 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[trace condensed: setup/common.sh@31-32 repeats the same "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" iteration for each remaining /proc/meminfo key — Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted — timestamps 00:06:04.124-00:06:04.125]
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:06:04.125
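The loop traced above reads a meminfo file one `key: value` pair at a time and skips every key until the requested one (here `HugePages_Total`, answered with 1024) matches. A minimal sketch of that helper, reconstructed from the `setup/common.sh@17-33` line tags in this trace — not the authoritative SPDK source, and the optional file argument is an addition here for illustration:

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo helper seen in the trace (reconstructed from the
# setup/common.sh line tags; not the authoritative SPDK source). The third
# argument is a file override added here for testability.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f=${3:-/proc/meminfo}
	# Per-node counters live in sysfs when a NUMA node index is given
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines carry a "Node N " prefix; strip it
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# Skip every key until the requested one matches (the long run of
		# [[ ... ]] / continue entries in the trace)
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}
```

Called as `get_meminfo HugePages_Total` it answers from `/proc/meminfo`; called as `get_meminfo HugePages_Surp 0` it answers from node0's sysfs meminfo — the same pair of lookups this trace performs.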
00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.125 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20879656 kB' 'MemUsed: 11955036 kB' 'SwapCached: 0 kB' 'Active: 6854304 kB' 'Inactive: 3336096 kB' 'Active(anon): 6611492 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9873600 kB' 'Mapped: 47368 kB' 'AnonPages: 319920 kB' 'Shmem: 6294692 kB' 'KernelStack: 5816 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 229956 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: setup/common.sh@31-32 repeats the same "IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" iteration for every node0 meminfo key from MemTotal through HugePages_Free (each key in the dump above), timestamps 00:06:04.125-00:06:04.127]
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:04.127 00:18:31
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456484 kB' 'MemFree: 12110468 kB' 'MemUsed: 7346016 kB' 'SwapCached: 0 kB' 'Active: 5462476 kB' 'Inactive: 104880 kB' 'Active(anon): 5305264 kB' 'Inactive(anon): 0 kB' 'Active(file): 157212 kB' 'Inactive(file): 104880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5415008 kB' 'Mapped: 114388 kB' 'AnonPages: 152376 kB' 'Shmem: 5152916 kB' 'KernelStack: 4056 kB' 'PageTables: 3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 188408 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 116412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 
00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.127 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:04.128 node0=512 expecting 512 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:06:04.128 node1=512 expecting 512 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:04.128 00:06:04.128 real 0m1.279s 00:06:04.128 user 0m0.606s 00:06:04.128 sys 0m0.710s 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.128 00:18:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:04.128 ************************************ 00:06:04.128 END TEST per_node_1G_alloc 00:06:04.128 ************************************ 00:06:04.128 00:18:31 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:06:04.128 00:18:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.128 00:18:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.128 00:18:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:04.128 ************************************ 00:06:04.128 START TEST even_2G_alloc 00:06:04.128 ************************************ 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 
00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]] 00:06:04.128 00:18:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:05.065 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:05.065 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:05.065 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:05.065 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:05.065 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:05.065 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:05.065 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:05.065 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:05.065 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:05.066 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:05.066 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:05.066 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:05.066 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:05.066 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:05.066 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:05.066 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:05.066 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # 
local resv 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.334 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32987496 kB' 'MemAvailable: 36447212 kB' 'Buffers: 5520 kB' 'Cached: 15283140 kB' 'SwapCached: 0 kB' 'Active: 12317492 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917468 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 
0 kB' 'AnonPages: 473016 kB' 'Mapped: 161796 kB' 'Shmem: 11447660 kB' 'KReclaimable: 175200 kB' 'Slab: 418296 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243096 kB' 'KernelStack: 9904 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186908 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.335 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.336 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.336 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.337 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.338 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.338 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32987876 kB' 'MemAvailable: 36447592 kB' 'Buffers: 5520 kB' 'Cached: 15283140 kB' 'SwapCached: 0 kB' 'Active: 12317156 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917132 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472616 kB' 'Mapped: 161772 kB' 'Shmem: 11447660 kB' 'KReclaimable: 175200 kB' 'Slab: 418272 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243072 kB' 'KernelStack: 9888 kB' 'PageTables: 6992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186876 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.339 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.340 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.341 
00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.341 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100
-- # get_meminfo HugePages_Rsvd 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32988144 kB' 'MemAvailable: 36447860 kB' 'Buffers: 5520 kB' 'Cached: 15283160 kB' 'SwapCached: 0 kB' 'Active: 12317256 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917232 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472684 kB' 'Mapped: 161772 kB' 'Shmem: 11447680 kB' 'KReclaimable: 175200 kB' 'Slab: 418328 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243128 kB' 'KernelStack: 9872 kB' 'PageTables: 6980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 
'Committed_AS: 12885832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186876 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.343 
00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.343 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.344 00:18:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.344 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:05.347 nr_hugepages=1024 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:05.347 resv_hugepages=0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:05.347 surplus_hugepages=0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:05.347 anon_hugepages=0 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32988144 kB' 'MemAvailable: 36447860 kB' 'Buffers: 5520 kB' 'Cached: 15283180 kB' 'SwapCached: 0 kB' 'Active: 12317320 kB' 'Inactive: 3440976 kB' 'Active(anon): 
11917296 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472748 kB' 'Mapped: 161772 kB' 'Shmem: 11447700 kB' 'KReclaimable: 175200 kB' 'Slab: 418328 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243128 kB' 'KernelStack: 9904 kB' 'PageTables: 7084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12885852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186876 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.347 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.348 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.349 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:05.350 00:18:33 
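The scan traced above is a plain "key: value" lookup over /proc/meminfo. A minimal sketch of that pattern (an assumed simplification, not the actual setup/common.sh helper) splits each line on ': ' and skips every key until the requested one matches:

```shell
# Hedged sketch of the meminfo scan traced above: split each "key: value"
# line on ': ', skip non-matching keys with `continue`, print the value
# of the requested key and stop. The file argument is for illustration;
# the traced script reads /proc/meminfo directly.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # e.g. Buffers, Cached, ... are skipped
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# demo against a fixed sample so the result is deterministic
sample=$(mktemp)
printf '%s\n' 'MemTotal: 32834692 kB' 'HugePages_Total: 1024' > "$sample"
get_meminfo HugePages_Total "$sample"    # prints 1024
rm -f "$sample"
```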
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.350 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20868876 kB' 'MemUsed: 11965816 kB' 'SwapCached: 0 kB' 'Active: 6855052 kB' 'Inactive: 3336096 kB' 'Active(anon): 6612240 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9873696 kB' 'Mapped: 47368 kB' 'AnonPages: 320560 kB' 'Shmem: 6294788 kB' 'KernelStack: 5848 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 229972 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:05.350 00:18:33 [trace condensed: setup/common.sh@31-@32 repeat the same read/continue scan over every node0 meminfo key (MemTotal, MemFree, ..., HugePages_Total, HugePages_Free) until HugePages_Surp matches] 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:05.352 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456484 kB' 'MemFree: 12119524 kB' 'MemUsed: 7336960 kB' 'SwapCached: 0 kB' 'Active: 5462000 kB' 'Inactive: 104880 kB' 'Active(anon): 5304788 kB' 
'Inactive(anon): 0 kB' 'Active(file): 157212 kB' 'Inactive(file): 104880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5415008 kB' 'Mapped: 114404 kB' 'AnonPages: 151916 kB' 'Shmem: 5152916 kB' 'KernelStack: 4056 kB' 'PageTables: 3232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 188356 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 116360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.353 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.354 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:05.355 node0=512 expecting 512 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:06:05.355 node1=512 expecting 512 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:06:05.355 00:06:05.355 real 0m1.220s 00:06:05.355 user 0m0.549s 00:06:05.355 sys 0m0.704s 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.355 00:18:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:05.355 ************************************ 00:06:05.355 END TEST even_2G_alloc 00:06:05.355 ************************************ 00:06:05.355 00:18:33 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:06:05.355 00:18:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:05.355 00:18:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.355 00:18:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:05.355 
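The long `IFS=': '`/`read -r var val _`/`continue` runs traced above all come from one small helper, `get_meminfo` in `setup/common.sh`, which scans a meminfo file for a single key (here `HugePages_Surp`). The following is a minimal sketch reconstructed from the trace, not the script's exact source; the `MEM_F` override is an assumption added purely so the sketch can be exercised against a sample file instead of the live `/proc/meminfo`.

```shell
#!/usr/bin/env bash
# extglob is needed for the "Node +([0-9]) " prefix-strip pattern below.
shopt -s extglob

# Sketch of get_meminfo as seen in the xtrace: print the value of one
# meminfo key, optionally from a specific NUMA node's meminfo file.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=${MEM_F:-/proc/meminfo}   # MEM_F override is hypothetical, for testing
    # Per-node statistics live under /sys when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "Key: value kB" on ':' and spaces, as in the trace.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    echo 0
}
```

The trace's loop compares every key against the glob-escaped target (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) and `continue`s on each mismatch, which is why one lookup produces dozens of trace lines; the sketch does the same with a plain string compare.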
************************************ 00:06:05.355 START TEST odd_alloc 00:06:05.355 ************************************ 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:06:05.355 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:05.356 00:18:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:06.742 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:06.742 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:06.742 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:06.742 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:06.742 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:06.742 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:06.742 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:06.742 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:06.742 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:06.742 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:06.742 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:06.742 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:06.742 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 
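The `odd_alloc` setup traced above (`hugepages.sh@81`-`@84`: `nodes_test[_no_nodes - 1]=512`, `: 513`, `: 1`, then `nodes_test[_no_nodes - 1]=513`) is the per-node distribution of an odd hugepage count. A sketch of that loop, reconstructed from the trace rather than copied from the script, shows how 1025 pages over 2 NUMA nodes yields 513 on node0 and 512 on node1:

```shell
#!/usr/bin/env bash
# Sketch of the per-node split seen in the odd_alloc trace: walk the
# nodes from highest to lowest, giving each the floor of the remaining
# pages divided by the remaining node count, so the remainder drifts
# toward node0.
get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    # -g so callers can read the result array, as the test script does.
    local -ga nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        # The ": $((...))" no-ops mirror the "@83/@84 -- # : N" trace lines.
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        : $(( _no_nodes-- ))
    done
}
```

With 1025 pages and 2 nodes, the first pass assigns `1025 / 2 = 512` to node1, leaving 513 for node0, which matches the `HugePages_Total: 1025` and the 512/513 assignments in the trace.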
00:06:06.742 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:06.742 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:06.742 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:06.742 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.742 00:18:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.742 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32990328 kB' 'MemAvailable: 36450044 kB' 'Buffers: 5520 kB' 'Cached: 15283268 kB' 'SwapCached: 0 kB' 'Active: 12313660 kB' 'Inactive: 3440976 kB' 'Active(anon): 11913636 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469088 kB' 'Mapped: 160844 kB' 'Shmem: 11447788 kB' 'KReclaimable: 175200 kB' 'Slab: 418300 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243100 kB' 'KernelStack: 9856 kB' 'PageTables: 6752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12870548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186780 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.743 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:06.743 [... ~33 near-identical trace iterations elided: setup/common.sh@31-32 reads each /proc/meminfo key from Active through HardwareCorrupted with "IFS=': '; read -r var val _" and takes "continue" on every key that is not AnonHugePages ...] 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:06.744
00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.744 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32993128 kB' 'MemAvailable: 36452844 kB' 'Buffers: 5520 kB' 'Cached: 15283268 kB' 'SwapCached: 0 kB' 'Active: 12314944 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914920 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470388 kB' 'Mapped: 161224 kB' 'Shmem: 11447788 kB' 'KReclaimable: 175200 kB' 'Slab: 418300 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243100 kB' 'KernelStack: 9872 kB' 'PageTables: 6824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12872052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186780 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:06.744 [... ~55 near-identical trace iterations elided: setup/common.sh@31-32 reads each /proc/meminfo key from MemTotal through HugePages_Rsvd with "IFS=': '; read -r var val _" and takes "continue" on every key that is not HugePages_Surp ...] 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.746 00:18:34
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32993356 kB' 'MemAvailable: 36453072 kB' 'Buffers: 5520 kB' 'Cached: 15283292 kB' 'SwapCached: 0 kB' 'Active: 12317544 kB' 'Inactive: 3440976 kB' 'Active(anon): 11917520 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 472904 kB' 'Mapped: 161116 kB' 'Shmem: 11447812 kB' 'KReclaimable: 175200 kB' 'Slab: 418284 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243084 kB' 'KernelStack: 9824 kB' 'PageTables: 6692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12875240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186764 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:06.746 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746
[identical key-scan trace elided: each remaining /proc/meminfo key from MemFree through HugePages_Free is compared against HugePages_Rsvd and skipped via continue]
00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.748 00:18:34
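The lookup traced above is a plain linear scan of /proc/meminfo using `IFS=': ' read -r var val _`, echoing the value once the requested key matches and falling back to 0 otherwise. A minimal sketch of that pattern, under the assumption that this mirrors `get_meminfo` in setup/common.sh (the name `get_meminfo_sketch` and the sample file path are hypothetical, used here only so the snippet is self-contained):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo key scan visible in the trace: split each line on
# ': ' into key, value, and trailing unit, and print the value for the
# requested key. Defaults to /proc/meminfo but accepts a file for testing.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Match the key exactly, as the [[ var == pattern ]] checks above do.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    echo 0   # key not present: report 0, matching the script's fallback
}

# Demonstrate against a tiny sample instead of the live /proc/meminfo:
printf '%s\n' 'MemTotal: 52291176 kB' 'HugePages_Rsvd: 0' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Rsvd /tmp/meminfo.sample   # prints 0
```

Splitting on `IFS=': '` treats runs of colons and spaces as one delimiter, so the `kB` unit lands in the throwaway `_` variable and `val` is the bare number.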
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:06:06.748 nr_hugepages=1025 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:06.748 resv_hugepages=0 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:06.748 surplus_hugepages=0 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:06.748 anon_hugepages=0 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32993356 kB' 'MemAvailable: 36453072 kB' 'Buffers: 5520 kB' 'Cached: 15283312 kB' 'SwapCached: 0 kB' 'Active: 12319360 kB' 'Inactive: 3440976 kB' 'Active(anon): 11919336 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474728 kB' 'Mapped: 161596 kB' 'Shmem: 11447832 kB' 'KReclaimable: 175200 kB' 'Slab: 418284 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243084 kB' 'KernelStack: 9840 kB' 'PageTables: 6756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12876724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186784 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.748 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:06:06.748
[identical key-scan trace elided: each subsequent /proc/meminfo key is compared against HugePages_Total and skipped via continue; the trace is truncated mid-scan]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 
00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:06.749 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 
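The trace above is a bash xtrace of a meminfo lookup: the script splits each line of a meminfo-format file on `": "`, skips keys that do not match the requested one (the long runs of `continue`), and echoes the value once the key matches (here, `HugePages_Total` yielding 1025). A minimal sketch of that pattern, with function and variable names reconstructed from the trace rather than taken from the actual SPDK `setup/common.sh`:

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo pattern seen in the xtrace: scan a
# meminfo-style file, splitting each line on ': ', and print the value
# of the requested key. The 'kB' unit, when present, lands in the
# discarded third field. Names here are assumptions, not SPDK source.
get_meminfo() {
	local get=$1 mem_f=${2:-/proc/meminfo}
	local var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue  # non-matching keys are skipped, as in the trace
		echo "$val"
		return 0
	done < "$mem_f"
	return 1  # key not found
}
```

On a system with hugepages configured, `get_meminfo HugePages_Total` would print the node-wide total the trace checks against `nr_hugepages + surp + resv`.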
00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20877016 kB' 'MemUsed: 11957676 kB' 'SwapCached: 0 kB' 'Active: 6853064 kB' 'Inactive: 3336096 kB' 'Active(anon): 6610252 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9873836 kB' 'Mapped: 47024 kB' 'AnonPages: 318508 kB' 'Shmem: 6294928 kB' 'KernelStack: 5864 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 230044 kB' 
'SReclaimable: 103204 kB' 'SUnreclaim: 126840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [... setup/common.sh@31-32 xtrace condensed: node0 meminfo keys MemTotal through HugePages_Free fail the HugePages_Surp match and hit continue ...] 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456484 kB' 'MemFree: 12112560 kB' 'MemUsed: 7343924 kB' 'SwapCached: 0 kB' 'Active: 5464636 kB' 'Inactive: 104880 kB' 'Active(anon): 5307424 kB' 'Inactive(anon): 0 kB' 'Active(file): 157212 kB' 'Inactive(file): 104880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5415020 kB' 'Mapped: 114244 kB' 'AnonPages: 155048 kB' 'Shmem: 5152928 kB' 'KernelStack: 3976 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 188240 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 116244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.751 00:18:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' [... setup/common.sh@31-32 xtrace condensed: node1 meminfo keys MemUsed through AnonPages fail the HugePages_Surp match and hit continue; log section truncated here ...] 00:06:06.752 
00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:06.752 00:18:34 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:06:06.752 node0=512 expecting 513 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:06:06.752 node1=513 expecting 512 00:06:06.752 00:18:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:06.752 00:06:06.752 real 0m1.288s 00:06:06.752 user 0m0.578s 00:06:06.752 sys 0m0.737s 00:06:06.753 00:18:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.753 00:18:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:06.753 ************************************ 00:06:06.753 END TEST odd_alloc 00:06:06.753 ************************************ 00:06:06.753 00:18:34 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:06:06.753 00:18:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.753 00:18:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.753 00:18:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:06.753 
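The odd_alloc trace above is the expansion of setup/common.sh's get_meminfo helper: it splits each "Field: value" line of /proc/meminfo on `IFS=': '`, `continue`s past every field that is not the one requested, then echoes the matching value (here HugePages_Surp, value 0) and returns. A minimal sketch of that parse pattern, using inline sample data rather than the real /proc/meminfo and a simplified loop rather than the script's `mapfile` array (the function name and sample values here are illustrative, not the SPDK code itself):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo field-scan pattern visible in the trace:
# split "Field: value" lines on ':' and ' ', skip non-matching fields
# with `continue`, echo the value of the requested field and return.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated [[ ... ]] / continue lines in the log
        echo "$val"                        # the `echo 0` step at setup/common.sh@33
        return 0
    done < <(printf '%s\n' \
        'HugePages_Total: 1536' \
        'HugePages_Free: 1536' \
        'HugePages_Surp: 0')
}

get_meminfo HugePages_Surp
```

With the sample data above this prints `0`, matching the `echo 0` / `return 0` pair the trace records for HugePages_Surp.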
************************************ 00:06:06.753 START TEST custom_alloc 00:06:06.753 ************************************ 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:06.753 00:18:34 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:07.690 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:07.690 0000:84:00.0 (8086 0a54): Already using the 
vfio-pci driver 00:06:07.690 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:07.690 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:07.690 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:07.690 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:07.690 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:07.690 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:07.690 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:07.690 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:07.690 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:07.955 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:07.955 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:07.955 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:07.955 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:07.955 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:07.955 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.955 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 31926976 kB' 'MemAvailable: 35386692 kB' 'Buffers: 5520 kB' 'Cached: 15283388 kB' 'SwapCached: 0 kB' 'Active: 12319712 kB' 'Inactive: 3440976 kB' 'Active(anon): 11919688 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474996 kB' 'Mapped: 161624 kB' 'Shmem: 11447908 kB' 'KReclaimable: 175200 kB' 'Slab: 418356 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243156 kB' 'KernelStack: 9824 kB' 'PageTables: 6736 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12876920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186832 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:07.955 00:18:35
[the identical IFS=': ' / read -r var val _ / [[ <field> == AnonHugePages ]] / continue trace repeats for each /proc/meminfo field in turn: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped and Shmem; the log chunk is truncated mid-scan]
00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 --
# read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.956 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.956 
00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 31926472 kB' 'MemAvailable: 35386188 kB' 'Buffers: 5520 kB' 'Cached: 15283388 kB' 'SwapCached: 0 kB' 'Active: 12314192 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914168 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469440 kB' 'Mapped: 161244 kB' 'Shmem: 11447908 kB' 'KReclaimable: 175200 kB' 'Slab: 418348 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243148 kB' 'KernelStack: 9792 kB' 'PageTables: 6624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12870816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186796 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:07.957 
00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.957 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.958 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 31927388 kB' 'MemAvailable: 35387104 kB' 'Buffers: 5520 kB' 'Cached: 15283412 kB' 'SwapCached: 0 kB' 'Active: 12313844 kB' 'Inactive: 3440976 kB' 'Active(anon): 11913820 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469052 kB' 'Mapped: 160688 kB' 'Shmem: 11447932 kB' 'KReclaimable: 175200 kB' 'Slab: 418332 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243132 kB' 'KernelStack: 9824 kB' 'PageTables: 6688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12870836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186780 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.959 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 
00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:06:07.960 nr_hugepages=1536 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:07.960 resv_hugepages=0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:07.960 surplus_hugepages=0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:07.960 anon_hugepages=0 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:07.960 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.960 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 31927388 kB' 'MemAvailable: 35387104 kB' 'Buffers: 5520 kB' 'Cached: 15283432 kB' 'SwapCached: 0 kB' 'Active: 12313988 kB' 'Inactive: 3440976 kB' 'Active(anon): 11913964 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469196 kB' 'Mapped: 160688 kB' 'Shmem: 11447952 kB' 'KReclaimable: 175200 kB' 'Slab: 418332 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243132 kB' 'KernelStack: 9856 kB' 'PageTables: 6784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12870860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186796 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.961 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.961 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan of the remaining /proc/meminfo keys — Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted — each fails the HugePages_Total match and runs IFS=': '; read -r var val _; continue] 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 20880780 kB' 'MemUsed: 11953912 kB' 'SwapCached: 0 kB' 'Active: 6852988 kB' 'Inactive: 3336096 kB' 'Active(anon): 6610176 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9873936 kB' 'Mapped: 46872 kB' 'AnonPages: 318316 kB' 'Shmem: 6295028 kB' 'KernelStack: 5848 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 230044 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
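The records above show the parsing pattern `setup/common.sh` is tracing: `mapfile` a meminfo file into an array, strip the `Node N ` prefix that per-NUMA-node files under `/sys/devices/system/node/nodeN/meminfo` carry, then split each line on `': '` and print the value for one key. A minimal sketch of that pattern, under assumptions drawn only from the trace — the function name and the explicit file-path parameter are illustrative here (the real helper selects the path from a node number):

```shell
#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) extended-glob pattern below

# Sketch of the meminfo lookup traced above: print the value of one key
# from a meminfo-style file, stripping any "Node N " prefix first.
get_meminfo_sketch() {
    local get=$1 mem_f=$2 var val _
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 ", "Node 1 ", etc.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp: 0" into key and value; ignore a trailing "kB".
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
```

Against a real node file this would be called as, e.g., `get_meminfo_sketch HugePages_Surp /sys/devices/system/node/node0/meminfo`, matching the `echo 0` result seen in the trace.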
00:06:07.962 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan of the node0 meminfo keys — MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total — each fails the HugePages_Surp match and runs IFS=': '; read -r var val _; continue] 00:06:07.963 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:07.963 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:07.964 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456484 kB' 'MemFree: 11046608 kB' 'MemUsed: 8409876 kB' 'SwapCached: 0 kB' 'Active: 5460972 kB' 'Inactive: 104880 kB' 'Active(anon): 5303760 kB' 'Inactive(anon): 0 kB' 'Active(file): 157212 kB' 'Inactive(file): 104880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5415036 kB' 'Mapped: 113816 kB' 'AnonPages: 150880 kB' 'Shmem: 5152944 kB' 'KernelStack: 4008 kB' 'PageTables: 2972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 71996 kB' 'Slab: 188288 kB' 'SReclaimable: 71996 kB' 'SUnreclaim: 116292 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 
00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.964 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:06:07.965 node0=512 expecting 512 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:06:07.965 node1=1024 expecting 1024 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:07.965 00:06:07.965 real 0m1.281s 00:06:07.965 user 0m0.578s 00:06:07.965 sys 0m0.739s 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.965 00:18:35 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:06:07.965 ************************************ 00:06:07.965 END TEST custom_alloc 00:06:07.965 ************************************ 00:06:08.225 00:18:35 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:08.225 00:18:35 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.225 00:18:35 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.225 00:18:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:08.225 ************************************ 00:06:08.225 START TEST no_shrink_alloc 00:06:08.225 ************************************ 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:08.225 00:18:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:09.233 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:09.233 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:09.233 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:09.233 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:09.233 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:09.233 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:09.233 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:09.233 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:09.233 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:09.233 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:09.233 0000:80:04.6 (8086 3c26): Already using 
the vfio-pci driver 00:06:09.233 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:09.233 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:09.233 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:09.233 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:09.233 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:09.233 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- 
# [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32961420 kB' 'MemAvailable: 36421136 kB' 'Buffers: 5520 kB' 'Cached: 15283516 kB' 'SwapCached: 0 kB' 'Active: 12314448 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914424 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469572 kB' 'Mapped: 160812 kB' 'Shmem: 11448036 kB' 'KReclaimable: 175200 kB' 'Slab: 418232 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243032 kB' 'KernelStack: 9824 kB' 'PageTables: 6752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12871068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186844 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.233 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/compare/continue trace repeated for each remaining /proc/meminfo key (Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) ...]
00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.234
00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32961420 kB' 'MemAvailable: 36421136 kB' 'Buffers: 5520 kB' 'Cached: 15283516 kB' 'SwapCached: 0 kB' 'Active: 12315040 kB' 'Inactive: 3440976 kB' 'Active(anon): 11915016 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470200 kB' 'Mapped: 160864 kB' 'Shmem: 11448036 kB' 'KReclaimable: 175200 kB' 'Slab: 418224 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243024 kB' 'KernelStack: 9856 kB' 'PageTables: 6848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12873448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186828 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:06:09.234 00:18:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/compare/continue trace repeated for each /proc/meminfo key until HugePages_Surp is reached ...]
00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:06:09.235
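The trace above is the xtrace of SPDK's `get_meminfo` helper in `setup/common.sh`: it reads `/proc/meminfo` (or a per-NUMA-node meminfo file) into an array, then loops with `IFS=': '; read -r var val _`, issuing `continue` for every key that does not match the requested one and echoing the matching value. A minimal sketch of that loop, reconstructed from the trace itself (the real `setup/common.sh` may differ in details):

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo helper whose xtrace appears above.
# Reconstructed from the trace (locals, mapfile, IFS=': ' read loop);
# the actual SPDK setup/common.sh may differ.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}   # key to look up, optional NUMA node number
	local var val _ line
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-node lookup, mirroring the node/meminfo existence check in the trace
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node meminfo files prefix every line with "Node N "; strip that prefix
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		# Split e.g. "HugePages_Surp:     0" into key and value
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}
```

On the host captured in this log, `get_meminfo HugePages_Surp` would print `0`, which is exactly the `surp=0` assignment the trace records in `setup/hugepages.sh`.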
00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32961964 kB' 'MemAvailable: 36421680 kB' 'Buffers: 5520 kB' 'Cached: 15283520 kB' 'SwapCached: 0 kB' 'Active: 12314436 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914412 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469628 kB' 'Mapped: 160788 kB' 'Shmem: 11448040 kB' 'KReclaimable: 175200 kB' 'Slab: 418208 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243008 kB' 'KernelStack: 10048 kB' 'PageTables: 7600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12872104 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186940 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 
00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.235 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 
00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.236 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@100 -- # resv=0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:06:09.237 nr_hugepages=1024 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:06:09.237 resv_hugepages=0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:06:09.237 surplus_hugepages=0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:06:09.237 anon_hugepages=0 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32959952 kB' 'MemAvailable: 36419668 kB' 'Buffers: 5520 kB' 'Cached: 15283524 kB' 'SwapCached: 0 kB' 'Active: 12314772 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914748 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469964 kB' 'Mapped: 160788 kB' 'Shmem: 11448044 kB' 'KReclaimable: 175200 kB' 'Slab: 418208 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243008 kB' 'KernelStack: 9904 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12873492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186876 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:09.237 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.237 00:18:37 [trace condensed: the read loop stepped past every non-matching /proc/meminfo key, from Inactive(file) through Unaccepted, emitting the same IFS=': ' / read -r var val _ / continue triple for each, until it reached the requested key] 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l
]] 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:09.238 
00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:09.238 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 19826732 kB' 'MemUsed: 13007960 kB' 'SwapCached: 0 kB' 'Active: 6852932 kB' 'Inactive: 3336096 kB' 'Active(anon): 6610120 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9874060 kB' 'Mapped: 46948 kB' 'AnonPages: 318104 kB' 'Shmem: 6295152 kB' 'KernelStack: 5832 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 229976 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:09.499 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.499 00:18:37 [trace condensed: the read loop stepped past every non-matching node0 meminfo key, from MemFree through Unaccepted, emitting the same IFS=': ' / read -r var val _ / continue triple for each] 00:18:37 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:09.501 node0=1024 expecting 1024 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:09.501 00:18:37 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:09.501 00:18:37 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:10.447 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:10.447 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:10.447 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:10.447 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:10.447 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:10.447 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:10.447 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:10.447 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:10.447 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:10.447 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:06:10.447 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:06:10.447 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:06:10.447 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:06:10.447 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:06:10.447 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:06:10.447 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:06:10.447 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:06:10.447 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32936196 kB' 'MemAvailable: 36395912 kB' 'Buffers: 5520 kB' 'Cached: 15283628 kB' 'SwapCached: 0 kB' 'Active: 12314936 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914912 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469944 kB' 'Mapped: 160964 kB' 'Shmem: 11448148 kB' 'KReclaimable: 175200 kB' 'Slab: 418212 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243012 kB' 'KernelStack: 9824 kB' 'PageTables: 6712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12871324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186812 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 
00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 
00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.447 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.447 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 
32935976 kB' 'MemAvailable: 36395692 kB' 'Buffers: 5520 kB' 'Cached: 15283632 kB' 'SwapCached: 0 kB' 'Active: 12314572 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914548 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469588 kB' 'Mapped: 160932 kB' 'Shmem: 11448152 kB' 'KReclaimable: 175200 kB' 'Slab: 418212 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243012 kB' 'KernelStack: 9792 kB' 'PageTables: 6600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12871344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186796 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 
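Each `get_meminfo` call above rescans the whole snapshot, one `continue` per non-matching key, which is what makes these traces so long. An alternative sketch (my own, not SPDK's approach) loads the snapshot once into a bash associative array so repeated `HugePages_*` lookups are direct; the heredoc is a trimmed stand-in for the values printed in the log:

```shell
#!/usr/bin/env bash
# Load a meminfo-style snapshot once into an associative array,
# then look fields up by key instead of rescanning per query.
declare -A mem
while IFS=': ' read -r var val _; do
    mem[$var]=$val
done <<'EOF'
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
EOF

echo "${mem[HugePages_Surp]}"   # prints 0
```

The trailing `_` in `read -r var val _` swallows unit suffixes such as the `kB` on `Hugepagesize`, exactly as in the traced loop.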
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 
00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:10.448 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:10.448-00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: the same IFS=': ' / read -r var val _ / [[ field == HugePages_Surp ]] / continue cycle for each remaining /proc/meminfo field, Zswapped through HugePages_Rsvd; none match]
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32944716 kB' 'MemAvailable: 36404432 kB' 'Buffers: 5520 kB' 'Cached: 15283660 kB' 'SwapCached: 0 kB' 'Active: 12314900 kB' 'Inactive: 3440976 kB' 'Active(anon): 11914876 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 469876 kB' 'Mapped: 160704 kB' 'Shmem: 11448180 kB' 'KReclaimable: 175200 kB' 'Slab: 418212 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243012 kB' 'KernelStack: 9856 kB' 'PageTables: 6812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12872360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186796 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:10.449 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:06:10.449-00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repeated xtrace condensed: the same IFS=': ' / read -r var val _ / [[ field == HugePages_Rsvd ]] / continue cycle for each /proc/meminfo field, MemFree through HugePages_Free; none match]
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:06:10.450 nr_hugepages=1024
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:06:10.450 resv_hugepages=0
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:06:10.450 surplus_hugepages=0
00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:06:10.450
anon_hugepages=0 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.450 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291176 kB' 'MemFree: 32944900 kB' 'MemAvailable: 36404616 kB' 'Buffers: 5520 kB' 'Cached: 15283672 kB' 'SwapCached: 0 kB' 'Active: 12315128 kB' 'Inactive: 3440976 kB' 'Active(anon): 11915104 kB' 'Inactive(anon): 0 kB' 'Active(file): 400024 kB' 'Inactive(file): 3440976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 470124 kB' 'Mapped: 160704 kB' 'Shmem: 11448192 kB' 'KReclaimable: 175200 kB' 'Slab: 418212 kB' 'SReclaimable: 175200 kB' 'SUnreclaim: 243012 kB' 'KernelStack: 10208 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12873748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 186956 kB' 'VmallocChunk: 0 kB' 'Percpu: 19968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 536868 kB' 'DirectMap2M: 19308544 kB' 'DirectMap1G: 40894464 kB' 00:06:10.450
[xtrace condensed: setup/common.sh@31-32 scans every /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Total; each misses and continues]
00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9]) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:10.451 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 19824088 kB' 'MemUsed: 13010604 kB' 'SwapCached: 0 kB' 'Active: 6852516 kB' 'Inactive: 3336096 kB' 'Active(anon): 6609704 kB' 'Inactive(anon): 0 kB' 'Active(file): 242812 kB' 'Inactive(file): 3336096 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9874132 kB' 'Mapped: 46864 kB' 'AnonPages: 317568 kB' 'Shmem: 6295224 kB' 'KernelStack: 5848 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103204 kB' 'Slab: 229972 kB' 'SReclaimable: 103204 kB' 'SUnreclaim: 126768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:10.451
[xtrace condensed: setup/common.sh@31-32 scans node0 meminfo keys from MemTotal through Mapped against HugePages_Surp; each misses and continues]
00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:06:10.451 node0=1024 expecting 1024 00:06:10.451 00:18:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:06:10.451 00:06:10.451 real 0m2.327s 00:06:10.452 user 0m1.045s 00:06:10.452 sys 0m1.343s 00:06:10.452 00:18:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.452 00:18:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:10.452 ************************************ 00:06:10.452 END TEST no_shrink_alloc 00:06:10.452 ************************************ 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:06:10.452 00:18:38 setup.sh.hugepages -- 
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:06:10.452 00:18:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:06:10.452 00:06:10.452 real 0m10.084s 00:06:10.452 user 0m4.153s 00:06:10.452 sys 0m5.297s 00:06:10.452 00:18:38 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.452 00:18:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:10.452 ************************************ 00:06:10.452 END TEST hugepages 00:06:10.452 ************************************ 00:06:10.452 00:18:38 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:10.452 00:18:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.452 00:18:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.452 00:18:38 setup.sh -- common/autotest_common.sh@10 
-- # set +x 00:06:10.452 ************************************ 00:06:10.452 START TEST driver 00:06:10.452 ************************************ 00:06:10.452 00:18:38 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:06:10.709 * Looking for test storage... 00:06:10.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:10.710 00:18:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:10.710 00:18:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:10.710 00:18:38 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:13.250 00:18:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:13.250 00:18:40 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.250 00:18:40 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.250 00:18:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:13.250 ************************************ 00:06:13.250 START TEST guess_driver 00:06:13.250 ************************************ 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:13.250 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:13.250 
00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:13.250 Looking for driver=vfio-pci 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:13.250 00:18:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:14.189 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:14.190 00:18:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:15.129 00:18:42 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:15.129 00:18:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:17.667 00:06:17.667 real 0m4.427s 00:06:17.667 user 0m0.977s 00:06:17.667 sys 0m1.623s 00:06:17.667 00:18:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.667 00:18:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:17.667 ************************************ 00:06:17.667 END TEST guess_driver 00:06:17.667 ************************************ 00:06:17.667 00:06:17.667 real 0m6.934s 00:06:17.667 user 0m1.540s 00:06:17.667 sys 0m2.592s 00:06:17.667 00:18:45 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.667 00:18:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:17.667 ************************************ 00:06:17.667 END TEST driver 00:06:17.667 ************************************ 00:06:17.667 00:18:45 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:17.667 00:18:45 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.667 00:18:45 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.667 00:18:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:17.667 ************************************ 00:06:17.667 START TEST devices 00:06:17.667 ************************************ 00:06:17.667 00:18:45 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:06:17.667 * Looking for test storage... 
00:06:17.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:06:17.667 00:18:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:17.667 00:18:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:17.667 00:18:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:17.667 00:18:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:06:19.050 00:18:46 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:06:19.050 No valid GPT data, bailing 00:06:19.050 00:18:46 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:06:19.050 00:18:46 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:19.050 00:18:46 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:06:19.050 00:18:46 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:19.050 00:18:46 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.050 00:18:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:19.050 ************************************ 00:06:19.050 START TEST nvme_mount 00:06:19.050 ************************************ 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:19.050 00:18:46 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:19.050 00:18:46 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:06:19.990 Creating new GPT entries in memory. 00:06:19.990 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:19.990 other utilities. 00:06:19.990 00:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:19.990 00:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:19.990 00:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:19.990 00:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:19.990 00:18:47 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:20.928 Creating new GPT entries in memory. 00:06:20.928 The operation has completed successfully. 
00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 844444 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:20.928 00:18:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:21.875 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:06:22.136 
00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:22.136 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:22.136 00:18:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:22.396 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:22.396 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:22.396 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:22.396 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:06:22.396 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:22.397 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:22.397 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:06:22.397 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:22.397 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:22.397 00:18:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:23.334 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:06:23.334 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:23.334 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:50 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:06:23.334 00:18:51 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:23.334 00:18:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:24.717 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:24.717 00:06:24.717 real 0m5.654s 00:06:24.717 user 0m1.266s 00:06:24.717 sys 0m2.128s 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:24.717 00:18:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:06:24.717 ************************************ 00:06:24.717 END TEST nvme_mount 00:06:24.717 ************************************ 00:06:24.717 00:18:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:06:24.717 00:18:52 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
00:06:24.717 00:18:52 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.717 00:18:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:24.717 ************************************ 00:06:24.717 START TEST dm_mount 00:06:24.717 ************************************ 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:06:24.717 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:06:24.718 00:18:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:06:25.656 Creating new GPT entries in memory. 00:06:25.656 GPT data structures destroyed! You may now partition the disk using fdisk or 00:06:25.656 other utilities. 00:06:25.656 00:18:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:06:25.656 00:18:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:25.656 00:18:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:25.656 00:18:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:25.656 00:18:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:06:26.598 Creating new GPT entries in memory. 00:06:26.598 The operation has completed successfully. 00:06:26.598 00:18:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:26.598 00:18:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:26.598 00:18:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:06:26.598 00:18:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:06:26.598 00:18:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:06:27.536 The operation has completed successfully. 
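With the two equal partitions created above (`--new=1:2048:2099199` and `--new=2:2099200:4196351`), the test next builds a device-mapper target named nvme_dm_test. One way to concatenate two partitions into one linear dm device is a two-row `dmsetup` table; the table below is an illustrative assumption, not the one devices.sh actually constructs:

```shell
#!/bin/sh
# Dry-run sketch: concatenate two equal partitions into one linear dm device.
# Device names are placeholders mirroring the log; no dmsetup call is made.
P1=/dev/nvme0n1p1
P2=/dev/nvme0n1p2
PART_SECTORS=2097152   # 1 GiB in 512-byte sectors, matching the sgdisk ranges

# dmsetup table rows: <logical_start> <num_sectors> linear <device> <offset>
TABLE="0 $PART_SECTORS linear $P1 0
$PART_SECTORS $PART_SECTORS linear $P2 0"

echo "$TABLE"
echo "# would run: dmsetup create nvme_dm_test  (table supplied on stdin)"
```

The resulting node would surface as /dev/mapper/nvme_dm_test and resolve to a dm-N block device, which is what the `readlink -f` step in the log checks.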
00:06:27.536 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:06:27.536 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:06:27.536 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 846212 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # 
PCI_ALLOWED=0000:84:00.0 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:27.797 00:18:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.775 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:06:28.776 00:18:56 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.776 00:18:56 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- 
# [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:06:29.713 
/dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:06:29.713 00:06:29.713 real 0m5.212s 00:06:29.713 user 0m0.804s 00:06:29.713 sys 0m1.355s 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.713 00:18:57 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:06:29.713 ************************************ 00:06:29.713 END TEST dm_mount 00:06:29.713 ************************************ 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:06:29.970 00:18:57 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:06:30.228 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:06:30.228 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:06:30.228 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:06:30.228 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:06:30.228 00:18:57 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:06:30.228 00:18:57 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:06:30.228 00:18:57 setup.sh.devices -- 
setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:06:30.228 00:18:57 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:06:30.229 00:18:57 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:06:30.229 00:18:57 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:06:30.229 00:18:57 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:06:30.229 00:06:30.229 real 0m12.613s 00:06:30.229 user 0m2.656s 00:06:30.229 sys 0m4.464s 00:06:30.229 00:18:57 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.229 00:18:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:06:30.229 ************************************ 00:06:30.229 END TEST devices 00:06:30.229 ************************************ 00:06:30.229 00:06:30.229 real 0m39.332s 00:06:30.229 user 0m11.357s 00:06:30.229 sys 0m17.272s 00:06:30.229 00:18:57 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.229 00:18:57 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:30.229 ************************************ 00:06:30.229 END TEST setup.sh 00:06:30.229 ************************************ 00:06:30.229 00:18:57 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:31.166 Hugepages 00:06:31.166 node hugesize free / total 00:06:31.166 node0 1048576kB 0 / 0 00:06:31.166 node0 2048kB 2048 / 2048 00:06:31.166 node1 1048576kB 0 / 0 00:06:31.166 node1 2048kB 0 / 0 00:06:31.166 00:06:31.166 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:31.166 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:06:31.166 I/OAT 0000:00:04.6 8086 3c26 0 
ioatdma - - 00:06:31.166 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:06:31.166 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:06:31.166 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:31.166 00:18:58 -- spdk/autotest.sh@130 -- # uname -s 00:06:31.166 00:18:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:31.166 00:18:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:31.166 00:18:58 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:32.104 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:32.104 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:32.104 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:32.364 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:32.364 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:32.364 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:32.364 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:32.364 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:32.364 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:33.306 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:06:33.306 00:19:00 -- common/autotest_common.sh@1528 -- # sleep 1 00:06:34.244 
00:19:01 -- common/autotest_common.sh@1529 -- # bdfs=() 00:06:34.244 00:19:01 -- common/autotest_common.sh@1529 -- # local bdfs 00:06:34.244 00:19:01 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:06:34.244 00:19:01 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:06:34.244 00:19:01 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:34.244 00:19:02 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:34.244 00:19:02 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:34.244 00:19:02 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:34.244 00:19:02 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:34.244 00:19:02 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:34.244 00:19:02 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:06:34.244 00:19:02 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:35.184 Waiting for block devices as requested 00:06:35.184 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:06:35.442 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:06:35.442 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:06:35.701 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:06:35.701 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:06:35.701 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:06:35.701 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:06:35.961 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:06:35.961 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:06:35.961 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:06:36.222 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:06:36.480 0000:80:04.2 (8086 3c22): 
vfio-pci -> ioatdma 00:06:36.480 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:06:36.480 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:06:36.740 00:19:04 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:36.740 00:19:04 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1498 -- # grep 0000:84:00.0/nvme/nvme 00:06:36.740 00:19:04 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:06:36.740 00:19:04 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:36.740 00:19:04 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:36.740 00:19:04 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:36.740 00:19:04 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:06:36.740 00:19:04 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:36.740 00:19:04 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:36.740 00:19:04 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:36.740 00:19:04 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:36.740 00:19:04 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:36.740 00:19:04 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:36.740 00:19:04 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:36.740 
00:19:04 -- common/autotest_common.sh@1553 -- # continue 00:06:36.740 00:19:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:36.740 00:19:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.740 00:19:04 -- common/autotest_common.sh@10 -- # set +x 00:06:36.740 00:19:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:36.740 00:19:04 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:36.740 00:19:04 -- common/autotest_common.sh@10 -- # set +x 00:06:36.740 00:19:04 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:37.679 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:37.679 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:37.679 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:06:37.679 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:06:37.679 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:06:37.973 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:06:37.973 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:06:37.973 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:06:37.973 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:06:37.973 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:06:38.911 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:06:38.911 00:19:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:38.911 00:19:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:38.911 00:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:38.911 00:19:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:38.911 00:19:06 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:38.911 00:19:06 -- 
common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:38.911 00:19:06 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:38.911 00:19:06 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:38.911 00:19:06 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:38.911 00:19:06 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:38.911 00:19:06 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:38.911 00:19:06 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:38.911 00:19:06 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:38.911 00:19:06 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:38.911 00:19:06 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:06:38.911 00:19:06 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:06:38.911 00:19:06 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:38.911 00:19:06 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:06:38.911 00:19:06 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:06:38.911 00:19:06 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:38.911 00:19:06 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:06:38.911 00:19:06 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:84:00.0 00:06:38.911 00:19:06 -- common/autotest_common.sh@1588 -- # [[ -z 0000:84:00.0 ]] 00:06:38.911 00:19:06 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=850265 00:06:38.911 00:19:06 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.911 00:19:06 -- common/autotest_common.sh@1594 -- # waitforlisten 850265 00:06:38.911 00:19:06 -- common/autotest_common.sh@827 -- # '[' -z 850265 ']' 00:06:38.911 00:19:06 -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:38.911 00:19:06 -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:38.911 00:19:06 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.911 00:19:06 -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:38.911 00:19:06 -- common/autotest_common.sh@10 -- # set +x 00:06:38.911 [2024-07-12 00:19:06.633405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:38.911 [2024-07-12 00:19:06.633494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850265 ] 00:06:38.911 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.911 [2024-07-12 00:19:06.692931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.169 [2024-07-12 00:19:06.780498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.169 00:19:06 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.169 00:19:06 -- common/autotest_common.sh@860 -- # return 0 00:06:39.169 00:19:06 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:06:39.169 00:19:06 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:06:39.169 00:19:06 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:06:42.451 nvme0n1 00:06:42.451 00:19:10 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:42.709 [2024-07-12 00:19:10.380302] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:42.709 
[2024-07-12 00:19:10.380339] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:42.709 request: 00:06:42.709 { 00:06:42.709 "nvme_ctrlr_name": "nvme0", 00:06:42.709 "password": "test", 00:06:42.709 "method": "bdev_nvme_opal_revert", 00:06:42.709 "req_id": 1 00:06:42.709 } 00:06:42.709 Got JSON-RPC error response 00:06:42.709 response: 00:06:42.709 { 00:06:42.709 "code": -32603, 00:06:42.709 "message": "Internal error" 00:06:42.709 } 00:06:42.709 00:19:10 -- common/autotest_common.sh@1600 -- # true 00:06:42.709 00:19:10 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:06:42.709 00:19:10 -- common/autotest_common.sh@1604 -- # killprocess 850265 00:06:42.709 00:19:10 -- common/autotest_common.sh@946 -- # '[' -z 850265 ']' 00:06:42.709 00:19:10 -- common/autotest_common.sh@950 -- # kill -0 850265 00:06:42.709 00:19:10 -- common/autotest_common.sh@951 -- # uname 00:06:42.709 00:19:10 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:42.709 00:19:10 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 850265 00:06:42.709 00:19:10 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:42.709 00:19:10 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:42.709 00:19:10 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 850265' 00:06:42.709 killing process with pid 850265 00:06:42.709 00:19:10 -- common/autotest_common.sh@965 -- # kill 850265 00:06:42.709 00:19:10 -- common/autotest_common.sh@970 -- # wait 850265 00:06:42.709 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152
cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping 
cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:42.710 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:44.607 00:19:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:44.607 00:19:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:44.607 00:19:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:44.607 00:19:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:44.607 00:19:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:44.607 00:19:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:44.607 00:19:11 -- common/autotest_common.sh@10 -- # set +x 00:06:44.608 00:19:11 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:44.608 00:19:11 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:44.608 00:19:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.608 00:19:12 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.608 00:19:12 -- common/autotest_common.sh@10 -- # set +x 00:06:44.608 ************************************ 00:06:44.608 START TEST env 00:06:44.608 ************************************ 00:06:44.608 00:19:12 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:44.608 * Looking for test storage... 00:06:44.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:44.608 00:19:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:44.608 00:19:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.608 00:19:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.608 00:19:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.608 ************************************ 00:06:44.608 START TEST env_memory 00:06:44.608 ************************************ 00:06:44.608 00:19:12 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:44.608 00:06:44.608 00:06:44.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.608 http://cunit.sourceforge.net/ 00:06:44.608 00:06:44.608 00:06:44.608 Suite: memory 00:06:44.608 Test: alloc and free memory map ...[2024-07-12 00:19:12.151575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:44.608 passed 00:06:44.608 Test: mem map translation ...[2024-07-12 00:19:12.183062] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:44.608 [2024-07-12 00:19:12.183090] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:44.608 [2024-07-12 00:19:12.183143] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:44.608 [2024-07-12 00:19:12.183157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:44.608 passed 00:06:44.608 Test: mem map registration ...[2024-07-12 00:19:12.249593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:44.608 [2024-07-12 00:19:12.249619] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:44.608 passed 00:06:44.608 Test: mem map adjacent registrations ...passed 00:06:44.608 00:06:44.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.608 suites 1 1 n/a 0 0 00:06:44.608 tests 4 4 4 0 0 00:06:44.608 asserts 152 152 152 0 n/a 00:06:44.608 00:06:44.608 Elapsed time = 0.216 seconds 00:06:44.608 00:06:44.608 real 0m0.224s 00:06:44.608 user 0m0.214s 00:06:44.608 sys 0m0.010s 00:06:44.608 00:19:12 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.608 00:19:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:44.608 ************************************ 00:06:44.608 END TEST env_memory 00:06:44.608 ************************************ 00:06:44.608 00:19:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:44.608 00:19:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.608 00:19:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:06:44.608 00:19:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.608 ************************************ 00:06:44.608 START TEST env_vtophys 00:06:44.608 ************************************ 00:06:44.608 00:19:12 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:44.608 EAL: lib.eal log level changed from notice to debug 00:06:44.608 EAL: Detected lcore 0 as core 0 on socket 0 00:06:44.608 EAL: Detected lcore 1 as core 1 on socket 0 00:06:44.608 EAL: Detected lcore 2 as core 2 on socket 0 00:06:44.608 EAL: Detected lcore 3 as core 3 on socket 0 00:06:44.608 EAL: Detected lcore 4 as core 4 on socket 0 00:06:44.608 EAL: Detected lcore 5 as core 5 on socket 0 00:06:44.608 EAL: Detected lcore 6 as core 6 on socket 0 00:06:44.608 EAL: Detected lcore 7 as core 7 on socket 0 00:06:44.608 EAL: Detected lcore 8 as core 0 on socket 1 00:06:44.608 EAL: Detected lcore 9 as core 1 on socket 1 00:06:44.608 EAL: Detected lcore 10 as core 2 on socket 1 00:06:44.608 EAL: Detected lcore 11 as core 3 on socket 1 00:06:44.608 EAL: Detected lcore 12 as core 4 on socket 1 00:06:44.608 EAL: Detected lcore 13 as core 5 on socket 1 00:06:44.608 EAL: Detected lcore 14 as core 6 on socket 1 00:06:44.608 EAL: Detected lcore 15 as core 7 on socket 1 00:06:44.608 EAL: Detected lcore 16 as core 0 on socket 0 00:06:44.608 EAL: Detected lcore 17 as core 1 on socket 0 00:06:44.608 EAL: Detected lcore 18 as core 2 on socket 0 00:06:44.608 EAL: Detected lcore 19 as core 3 on socket 0 00:06:44.608 EAL: Detected lcore 20 as core 4 on socket 0 00:06:44.608 EAL: Detected lcore 21 as core 5 on socket 0 00:06:44.608 EAL: Detected lcore 22 as core 6 on socket 0 00:06:44.608 EAL: Detected lcore 23 as core 7 on socket 0 00:06:44.608 EAL: Detected lcore 24 as core 0 on socket 1 00:06:44.608 EAL: Detected lcore 25 as core 1 on socket 1 00:06:44.608 EAL: Detected lcore 26 as core 2 on socket 1 00:06:44.608 EAL: 
Detected lcore 27 as core 3 on socket 1 00:06:44.608 EAL: Detected lcore 28 as core 4 on socket 1 00:06:44.608 EAL: Detected lcore 29 as core 5 on socket 1 00:06:44.608 EAL: Detected lcore 30 as core 6 on socket 1 00:06:44.608 EAL: Detected lcore 31 as core 7 on socket 1 00:06:44.608 EAL: Maximum logical cores by configuration: 128 00:06:44.608 EAL: Detected CPU lcores: 32 00:06:44.608 EAL: Detected NUMA nodes: 2 00:06:44.608 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:44.608 EAL: Detected shared linkage of DPDK 00:06:44.608 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:44.608 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:44.608 EAL: Registered [vdev] bus. 00:06:44.608 EAL: bus.vdev log level changed from disabled to notice 00:06:44.608 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:44.609 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:44.609 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:44.609 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:44.609 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:44.609 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:44.609 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:44.609 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:44.609 EAL: No shared files mode enabled, IPC will be disabled 00:06:44.609 EAL: No shared files mode enabled, IPC is disabled 00:06:44.609 EAL: 
Bus pci wants IOVA as 'DC' 00:06:44.609 EAL: Bus vdev wants IOVA as 'DC' 00:06:44.609 EAL: Buses did not request a specific IOVA mode. 00:06:44.609 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:44.609 EAL: Selected IOVA mode 'VA' 00:06:44.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.609 EAL: Probing VFIO support... 00:06:44.609 EAL: IOMMU type 1 (Type 1) is supported 00:06:44.609 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:44.609 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:44.609 EAL: VFIO support initialized 00:06:44.609 EAL: Ask a virtual area of 0x2e000 bytes 00:06:44.609 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:44.609 EAL: Setting up physically contiguous memory... 00:06:44.609 EAL: Setting maximum number of open files to 524288 00:06:44.609 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:44.609 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:44.609 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 
00:06:44.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:44.609 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 
0x201800e00000, size 400000000 00:06:44.609 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.609 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:44.609 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:44.609 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.609 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:44.609 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:44.609 EAL: Hugepages will be freed exactly as allocated. 00:06:44.609 EAL: No shared files mode enabled, IPC is disabled 00:06:44.609 EAL: No shared files mode enabled, IPC is disabled 00:06:44.609 EAL: TSC frequency is ~2700000 KHz 00:06:44.609 EAL: Main lcore 0 is ready (tid=7f9271e4ba00;cpuset=[0]) 00:06:44.609 EAL: Trying to obtain current memory policy. 00:06:44.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.609 EAL: Restoring previous memory policy: 0 00:06:44.609 EAL: request: mp_malloc_sync 00:06:44.609 EAL: No shared files mode enabled, IPC is disabled 00:06:44.609 EAL: Heap on socket 0 was expanded by 2MB 00:06:44.609 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:44.868 EAL: Mem event callback 'spdk:(nil)' registered 00:06:44.868 00:06:44.868 00:06:44.868 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.868 http://cunit.sourceforge.net/ 00:06:44.868 00:06:44.868 00:06:44.868 Suite: components_suite 00:06:44.868 Test: vtophys_malloc_test ...passed 00:06:44.868 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.868 EAL: Trying to obtain current memory policy. 
00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 34MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 66MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 66MB 00:06:44.868 EAL: Trying to obtain current memory policy. 
00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 130MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 130MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.868 EAL: Restoring previous memory policy: 4 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was expanded by 258MB 00:06:44.868 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.868 EAL: request: mp_malloc_sync 00:06:44.868 EAL: No shared files mode enabled, IPC is disabled 00:06:44.868 EAL: Heap on socket 0 was shrunk by 258MB 00:06:44.868 EAL: Trying to obtain current memory policy. 00:06:44.868 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.128 EAL: Restoring previous memory policy: 4 00:06:45.128 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.128 EAL: request: mp_malloc_sync 00:06:45.128 EAL: No shared files mode enabled, IPC is disabled 00:06:45.128 EAL: Heap on socket 0 was expanded by 514MB 00:06:45.128 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.128 EAL: request: mp_malloc_sync 00:06:45.128 EAL: No shared files mode enabled, IPC is disabled 00:06:45.128 EAL: Heap on socket 0 was shrunk by 514MB 00:06:45.128 EAL: Trying to obtain current memory policy. 
00:06:45.128 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.390 EAL: Restoring previous memory policy: 4 00:06:45.390 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.390 EAL: request: mp_malloc_sync 00:06:45.390 EAL: No shared files mode enabled, IPC is disabled 00:06:45.390 EAL: Heap on socket 0 was expanded by 1026MB 00:06:45.649 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.649 EAL: request: mp_malloc_sync 00:06:45.649 EAL: No shared files mode enabled, IPC is disabled 00:06:45.649 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.649 passed 00:06:45.649 00:06:45.649 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.649 suites 1 1 n/a 0 0 00:06:45.649 tests 2 2 2 0 0 00:06:45.649 asserts 497 497 497 0 n/a 00:06:45.649 00:06:45.649 Elapsed time = 0.904 seconds 00:06:45.649 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.649 EAL: request: mp_malloc_sync 00:06:45.649 EAL: No shared files mode enabled, IPC is disabled 00:06:45.649 EAL: Heap on socket 0 was shrunk by 2MB 00:06:45.649 EAL: No shared files mode enabled, IPC is disabled 00:06:45.649 EAL: No shared files mode enabled, IPC is disabled 00:06:45.649 EAL: No shared files mode enabled, IPC is disabled 00:06:45.649 00:06:45.649 real 0m1.019s 00:06:45.649 user 0m0.472s 00:06:45.649 sys 0m0.510s 00:06:45.649 00:19:13 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.649 00:19:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 ************************************ 00:06:45.649 END TEST env_vtophys 00:06:45.649 ************************************ 00:06:45.649 00:19:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:45.649 00:19:13 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.649 00:19:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.649 00:19:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.649 
************************************ 00:06:45.649 START TEST env_pci 00:06:45.649 ************************************ 00:06:45.649 00:19:13 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:45.649 00:06:45.649 00:06:45.649 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.649 http://cunit.sourceforge.net/ 00:06:45.649 00:06:45.649 00:06:45.649 Suite: pci 00:06:45.649 Test: pci_hook ...[2024-07-12 00:19:13.474001] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 850991 has claimed it 00:06:45.909 EAL: Cannot find device (10000:00:01.0) 00:06:45.909 EAL: Failed to attach device on primary process 00:06:45.909 passed 00:06:45.909 00:06:45.909 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.909 suites 1 1 n/a 0 0 00:06:45.909 tests 1 1 1 0 0 00:06:45.909 asserts 25 25 25 0 n/a 00:06:45.909 00:06:45.909 Elapsed time = 0.018 seconds 00:06:45.909 00:06:45.909 real 0m0.030s 00:06:45.909 user 0m0.011s 00:06:45.909 sys 0m0.019s 00:06:45.910 00:19:13 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.910 00:19:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:45.910 ************************************ 00:06:45.910 END TEST env_pci 00:06:45.910 ************************************ 00:06:45.910 00:19:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:45.910 00:19:13 env -- env/env.sh@15 -- # uname 00:06:45.910 00:19:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:45.910 00:19:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:45.910 00:19:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.910 00:19:13 env -- 
common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:45.910 00:19:13 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.910 00:19:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.910 ************************************ 00:06:45.910 START TEST env_dpdk_post_init 00:06:45.910 ************************************ 00:06:45.910 00:19:13 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.910 EAL: Detected CPU lcores: 32 00:06:45.910 EAL: Detected NUMA nodes: 2 00:06:45.910 EAL: Detected shared linkage of DPDK 00:06:45.910 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.910 EAL: Selected IOVA mode 'VA' 00:06:45.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.910 EAL: VFIO support initialized 00:06:45.910 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.910 EAL: Using IOMMU type 1 (Type 1) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:06:45.910 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 
0000:80:04.2 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:06:46.169 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:06:46.740 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:06:50.058 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:06:50.058 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:06:50.316 Starting DPDK initialization... 00:06:50.316 Starting SPDK post initialization... 00:06:50.316 SPDK NVMe probe 00:06:50.316 Attaching to 0000:84:00.0 00:06:50.316 Attached to 0000:84:00.0 00:06:50.316 Cleaning up... 00:06:50.316 00:06:50.316 real 0m4.408s 00:06:50.316 user 0m3.302s 00:06:50.316 sys 0m0.174s 00:06:50.316 00:19:17 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.316 00:19:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:50.316 ************************************ 00:06:50.316 END TEST env_dpdk_post_init 00:06:50.316 ************************************ 00:06:50.316 00:19:17 env -- env/env.sh@26 -- # uname 00:06:50.316 00:19:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:50.316 00:19:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:50.316 00:19:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.316 00:19:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.316 00:19:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:50.316 ************************************ 00:06:50.316 START TEST env_mem_callbacks 00:06:50.316 
************************************ 00:06:50.316 00:19:18 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:50.316 EAL: Detected CPU lcores: 32 00:06:50.316 EAL: Detected NUMA nodes: 2 00:06:50.316 EAL: Detected shared linkage of DPDK 00:06:50.316 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:50.316 EAL: Selected IOVA mode 'VA' 00:06:50.316 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.316 EAL: VFIO support initialized 00:06:50.316 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:50.316 00:06:50.316 00:06:50.316 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.316 http://cunit.sourceforge.net/ 00:06:50.316 00:06:50.316 00:06:50.316 Suite: memory 00:06:50.316 Test: test ... 00:06:50.316 register 0x200000200000 2097152 00:06:50.316 malloc 3145728 00:06:50.316 register 0x200000400000 4194304 00:06:50.317 buf 0x200000500000 len 3145728 PASSED 00:06:50.317 malloc 64 00:06:50.317 buf 0x2000004fff40 len 64 PASSED 00:06:50.317 malloc 4194304 00:06:50.317 register 0x200000800000 6291456 00:06:50.317 buf 0x200000a00000 len 4194304 PASSED 00:06:50.317 free 0x200000500000 3145728 00:06:50.317 free 0x2000004fff40 64 00:06:50.317 unregister 0x200000400000 4194304 PASSED 00:06:50.317 free 0x200000a00000 4194304 00:06:50.317 unregister 0x200000800000 6291456 PASSED 00:06:50.317 malloc 8388608 00:06:50.317 register 0x200000400000 10485760 00:06:50.317 buf 0x200000600000 len 8388608 PASSED 00:06:50.317 free 0x200000600000 8388608 00:06:50.317 unregister 0x200000400000 10485760 PASSED 00:06:50.317 passed 00:06:50.317 00:06:50.317 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.317 suites 1 1 n/a 0 0 00:06:50.317 tests 1 1 1 0 0 00:06:50.317 asserts 15 15 15 0 n/a 00:06:50.317 00:06:50.317 Elapsed time = 0.005 seconds 00:06:50.317 00:06:50.317 real 0m0.044s 00:06:50.317 user 0m0.016s 00:06:50.317 sys 0m0.028s 
00:06:50.317 00:19:18 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.317 00:19:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:50.317 ************************************ 00:06:50.317 END TEST env_mem_callbacks 00:06:50.317 ************************************ 00:06:50.317 00:06:50.317 real 0m6.059s 00:06:50.317 user 0m4.139s 00:06:50.317 sys 0m0.965s 00:06:50.317 00:19:18 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.317 00:19:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:50.317 ************************************ 00:06:50.317 END TEST env 00:06:50.317 ************************************ 00:06:50.317 00:19:18 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:50.317 00:19:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.317 00:19:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.317 00:19:18 -- common/autotest_common.sh@10 -- # set +x 00:06:50.317 ************************************ 00:06:50.317 START TEST rpc 00:06:50.317 ************************************ 00:06:50.317 00:19:18 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:50.576 * Looking for test storage... 
00:06:50.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:50.576 00:19:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=851521 00:06:50.576 00:19:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.576 00:19:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:50.576 00:19:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 851521 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@827 -- # '[' -z 851521 ']' 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.576 00:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.576 [2024-07-12 00:19:18.255707] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:50.576 [2024-07-12 00:19:18.255821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851521 ] 00:06:50.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.576 [2024-07-12 00:19:18.316152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.576 [2024-07-12 00:19:18.403548] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:50.576 [2024-07-12 00:19:18.403616] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 851521' to capture a snapshot of events at runtime. 
00:06:50.576 [2024-07-12 00:19:18.403633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.576 [2024-07-12 00:19:18.403647] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.576 [2024-07-12 00:19:18.403658] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid851521 for offline analysis/debug. 00:06:50.576 [2024-07-12 00:19:18.403690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.834 00:19:18 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.834 00:19:18 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:50.834 00:19:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:50.834 00:19:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:50.834 00:19:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:50.834 00:19:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:50.834 00:19:18 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.834 00:19:18 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.834 00:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.834 ************************************ 00:06:50.834 START TEST rpc_integrity 00:06:50.834 ************************************ 00:06:50.834 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@1121 
-- # rpc_integrity 00:06:50.834 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:50.834 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.834 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.834 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.834 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:50.834 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:51.093 { 00:06:51.093 "name": "Malloc0", 00:06:51.093 "aliases": [ 00:06:51.093 "5e5534a0-7601-4108-ae41-c50f12953477" 00:06:51.093 ], 00:06:51.093 "product_name": "Malloc disk", 00:06:51.093 "block_size": 512, 00:06:51.093 "num_blocks": 16384, 00:06:51.093 "uuid": "5e5534a0-7601-4108-ae41-c50f12953477", 00:06:51.093 "assigned_rate_limits": { 00:06:51.093 "rw_ios_per_sec": 0, 00:06:51.093 "rw_mbytes_per_sec": 0, 00:06:51.093 "r_mbytes_per_sec": 0, 00:06:51.093 "w_mbytes_per_sec": 0 00:06:51.093 }, 00:06:51.093 "claimed": false, 
00:06:51.093 "zoned": false, 00:06:51.093 "supported_io_types": { 00:06:51.093 "read": true, 00:06:51.093 "write": true, 00:06:51.093 "unmap": true, 00:06:51.093 "write_zeroes": true, 00:06:51.093 "flush": true, 00:06:51.093 "reset": true, 00:06:51.093 "compare": false, 00:06:51.093 "compare_and_write": false, 00:06:51.093 "abort": true, 00:06:51.093 "nvme_admin": false, 00:06:51.093 "nvme_io": false 00:06:51.093 }, 00:06:51.093 "memory_domains": [ 00:06:51.093 { 00:06:51.093 "dma_device_id": "system", 00:06:51.093 "dma_device_type": 1 00:06:51.093 }, 00:06:51.093 { 00:06:51.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.093 "dma_device_type": 2 00:06:51.093 } 00:06:51.093 ], 00:06:51.093 "driver_specific": {} 00:06:51.093 } 00:06:51.093 ]' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 [2024-07-12 00:19:18.762990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:51.093 [2024-07-12 00:19:18.763043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.093 [2024-07-12 00:19:18.763068] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaf85f0 00:06:51.093 [2024-07-12 00:19:18.763084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.093 [2024-07-12 00:19:18.764698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.093 [2024-07-12 00:19:18.764726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:51.093 Passthru0 00:06:51.093 00:19:18 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:51.093 { 00:06:51.093 "name": "Malloc0", 00:06:51.093 "aliases": [ 00:06:51.093 "5e5534a0-7601-4108-ae41-c50f12953477" 00:06:51.093 ], 00:06:51.093 "product_name": "Malloc disk", 00:06:51.093 "block_size": 512, 00:06:51.093 "num_blocks": 16384, 00:06:51.093 "uuid": "5e5534a0-7601-4108-ae41-c50f12953477", 00:06:51.093 "assigned_rate_limits": { 00:06:51.093 "rw_ios_per_sec": 0, 00:06:51.093 "rw_mbytes_per_sec": 0, 00:06:51.093 "r_mbytes_per_sec": 0, 00:06:51.093 "w_mbytes_per_sec": 0 00:06:51.093 }, 00:06:51.093 "claimed": true, 00:06:51.093 "claim_type": "exclusive_write", 00:06:51.093 "zoned": false, 00:06:51.093 "supported_io_types": { 00:06:51.093 "read": true, 00:06:51.093 "write": true, 00:06:51.093 "unmap": true, 00:06:51.093 "write_zeroes": true, 00:06:51.093 "flush": true, 00:06:51.093 "reset": true, 00:06:51.093 "compare": false, 00:06:51.093 "compare_and_write": false, 00:06:51.093 "abort": true, 00:06:51.093 "nvme_admin": false, 00:06:51.093 "nvme_io": false 00:06:51.093 }, 00:06:51.093 "memory_domains": [ 00:06:51.093 { 00:06:51.093 "dma_device_id": "system", 00:06:51.093 "dma_device_type": 1 00:06:51.093 }, 00:06:51.093 { 00:06:51.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.093 "dma_device_type": 2 00:06:51.093 } 00:06:51.093 ], 00:06:51.093 "driver_specific": {} 00:06:51.093 }, 00:06:51.093 { 00:06:51.093 "name": "Passthru0", 00:06:51.093 "aliases": [ 00:06:51.093 "8be810ea-9780-55c5-a9a3-80094a4b8e20" 00:06:51.093 ], 00:06:51.093 "product_name": "passthru", 00:06:51.093 
"block_size": 512, 00:06:51.093 "num_blocks": 16384, 00:06:51.093 "uuid": "8be810ea-9780-55c5-a9a3-80094a4b8e20", 00:06:51.093 "assigned_rate_limits": { 00:06:51.093 "rw_ios_per_sec": 0, 00:06:51.093 "rw_mbytes_per_sec": 0, 00:06:51.093 "r_mbytes_per_sec": 0, 00:06:51.093 "w_mbytes_per_sec": 0 00:06:51.093 }, 00:06:51.093 "claimed": false, 00:06:51.093 "zoned": false, 00:06:51.093 "supported_io_types": { 00:06:51.093 "read": true, 00:06:51.093 "write": true, 00:06:51.093 "unmap": true, 00:06:51.093 "write_zeroes": true, 00:06:51.093 "flush": true, 00:06:51.093 "reset": true, 00:06:51.093 "compare": false, 00:06:51.093 "compare_and_write": false, 00:06:51.093 "abort": true, 00:06:51.093 "nvme_admin": false, 00:06:51.093 "nvme_io": false 00:06:51.093 }, 00:06:51.093 "memory_domains": [ 00:06:51.093 { 00:06:51.093 "dma_device_id": "system", 00:06:51.093 "dma_device_type": 1 00:06:51.093 }, 00:06:51.093 { 00:06:51.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.093 "dma_device_type": 2 00:06:51.093 } 00:06:51.093 ], 00:06:51.093 "driver_specific": { 00:06:51.093 "passthru": { 00:06:51.093 "name": "Passthru0", 00:06:51.093 "base_bdev_name": "Malloc0" 00:06:51.093 } 00:06:51.093 } 00:06:51.093 } 00:06:51.093 ]' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:51.093 00:19:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:51.093 00:06:51.093 real 0m0.244s 00:06:51.093 user 0m0.161s 00:06:51.093 sys 0m0.027s 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.093 00:19:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.093 ************************************ 00:06:51.093 END TEST rpc_integrity 00:06:51.093 ************************************ 00:06:51.093 00:19:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:51.093 00:19:18 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.093 00:19:18 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.093 00:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 ************************************ 00:06:51.352 START TEST rpc_plugins 00:06:51.352 ************************************ 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:51.352 00:19:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:06:51.352 00:19:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:51.352 00:19:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 00:19:18 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.352 00:19:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:51.352 { 00:06:51.352 "name": "Malloc1", 00:06:51.352 "aliases": [ 00:06:51.352 "db28438b-6b62-45c4-9a0c-9b8537c5429b" 00:06:51.352 ], 00:06:51.352 "product_name": "Malloc disk", 00:06:51.352 "block_size": 4096, 00:06:51.352 "num_blocks": 256, 00:06:51.352 "uuid": "db28438b-6b62-45c4-9a0c-9b8537c5429b", 00:06:51.352 "assigned_rate_limits": { 00:06:51.352 "rw_ios_per_sec": 0, 00:06:51.352 "rw_mbytes_per_sec": 0, 00:06:51.352 "r_mbytes_per_sec": 0, 00:06:51.352 "w_mbytes_per_sec": 0 00:06:51.352 }, 00:06:51.352 "claimed": false, 00:06:51.352 "zoned": false, 00:06:51.352 "supported_io_types": { 00:06:51.352 "read": true, 00:06:51.352 "write": true, 00:06:51.352 "unmap": true, 00:06:51.352 "write_zeroes": true, 00:06:51.352 "flush": true, 00:06:51.352 "reset": true, 00:06:51.352 "compare": false, 00:06:51.352 "compare_and_write": false, 00:06:51.352 "abort": true, 00:06:51.352 "nvme_admin": false, 00:06:51.352 "nvme_io": false 00:06:51.352 }, 00:06:51.352 "memory_domains": [ 00:06:51.352 { 00:06:51.352 "dma_device_id": "system", 00:06:51.352 "dma_device_type": 1 00:06:51.352 }, 00:06:51.352 { 00:06:51.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.352 "dma_device_type": 2 00:06:51.352 } 00:06:51.352 ], 00:06:51.352 "driver_specific": {} 00:06:51.352 } 00:06:51.352 ]' 00:06:51.352 00:19:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:51.352 00:19:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:51.352 00:19:19 rpc.rpc_plugins -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.352 00:19:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.352 00:19:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:51.352 00:19:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:51.352 00:19:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:51.352 00:06:51.352 real 0m0.128s 00:06:51.352 user 0m0.082s 00:06:51.352 sys 0m0.011s 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.352 00:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 ************************************ 00:06:51.352 END TEST rpc_plugins 00:06:51.352 ************************************ 00:06:51.352 00:19:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:51.352 00:19:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.352 00:19:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.352 00:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 ************************************ 00:06:51.352 START TEST rpc_trace_cmd_test 00:06:51.352 ************************************ 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:51.352 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid851521", 00:06:51.352 "tpoint_group_mask": "0x8", 00:06:51.352 "iscsi_conn": { 00:06:51.352 "mask": "0x2", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "scsi": { 00:06:51.352 "mask": "0x4", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "bdev": { 00:06:51.352 "mask": "0x8", 00:06:51.352 "tpoint_mask": "0xffffffffffffffff" 00:06:51.352 }, 00:06:51.352 "nvmf_rdma": { 00:06:51.352 "mask": "0x10", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "nvmf_tcp": { 00:06:51.352 "mask": "0x20", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "ftl": { 00:06:51.352 "mask": "0x40", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "blobfs": { 00:06:51.352 "mask": "0x80", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "dsa": { 00:06:51.352 "mask": "0x200", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "thread": { 00:06:51.352 "mask": "0x400", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "nvme_pcie": { 00:06:51.352 "mask": "0x800", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "iaa": { 00:06:51.352 "mask": "0x1000", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "nvme_tcp": { 00:06:51.352 "mask": "0x2000", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "bdev_nvme": { 00:06:51.352 "mask": "0x4000", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 }, 00:06:51.352 "sock": { 00:06:51.352 "mask": "0x8000", 00:06:51.352 "tpoint_mask": "0x0" 00:06:51.352 } 
00:06:51.352 }' 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:51.352 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:51.611 00:06:51.611 real 0m0.215s 00:06:51.611 user 0m0.187s 00:06:51.611 sys 0m0.020s 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.611 00:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.611 ************************************ 00:06:51.611 END TEST rpc_trace_cmd_test 00:06:51.611 ************************************ 00:06:51.611 00:19:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:51.611 00:19:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:51.611 00:19:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:51.611 00:19:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.611 00:19:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.611 00:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.611 ************************************ 00:06:51.611 START TEST rpc_daemon_integrity 00:06:51.611 ************************************ 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@1121 -- # rpc_integrity 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.611 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:51.871 { 00:06:51.871 "name": "Malloc2", 00:06:51.871 "aliases": [ 00:06:51.871 "7a9c0c04-ebb3-4228-b77f-66b19c0e98e0" 00:06:51.871 ], 00:06:51.871 "product_name": "Malloc disk", 00:06:51.871 "block_size": 512, 00:06:51.871 "num_blocks": 16384, 00:06:51.871 "uuid": "7a9c0c04-ebb3-4228-b77f-66b19c0e98e0", 00:06:51.871 "assigned_rate_limits": { 00:06:51.871 "rw_ios_per_sec": 0, 
00:06:51.871 "rw_mbytes_per_sec": 0, 00:06:51.871 "r_mbytes_per_sec": 0, 00:06:51.871 "w_mbytes_per_sec": 0 00:06:51.871 }, 00:06:51.871 "claimed": false, 00:06:51.871 "zoned": false, 00:06:51.871 "supported_io_types": { 00:06:51.871 "read": true, 00:06:51.871 "write": true, 00:06:51.871 "unmap": true, 00:06:51.871 "write_zeroes": true, 00:06:51.871 "flush": true, 00:06:51.871 "reset": true, 00:06:51.871 "compare": false, 00:06:51.871 "compare_and_write": false, 00:06:51.871 "abort": true, 00:06:51.871 "nvme_admin": false, 00:06:51.871 "nvme_io": false 00:06:51.871 }, 00:06:51.871 "memory_domains": [ 00:06:51.871 { 00:06:51.871 "dma_device_id": "system", 00:06:51.871 "dma_device_type": 1 00:06:51.871 }, 00:06:51.871 { 00:06:51.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.871 "dma_device_type": 2 00:06:51.871 } 00:06:51.871 ], 00:06:51.871 "driver_specific": {} 00:06:51.871 } 00:06:51.871 ]' 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 [2024-07-12 00:19:19.501215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:51.871 [2024-07-12 00:19:19.501263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.871 [2024-07-12 00:19:19.501290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9462f0 00:06:51.871 [2024-07-12 00:19:19.501305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.871 [2024-07-12 00:19:19.503047] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.871 
[2024-07-12 00:19:19.503074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:51.871 Passthru0 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:51.871 { 00:06:51.871 "name": "Malloc2", 00:06:51.871 "aliases": [ 00:06:51.871 "7a9c0c04-ebb3-4228-b77f-66b19c0e98e0" 00:06:51.871 ], 00:06:51.871 "product_name": "Malloc disk", 00:06:51.871 "block_size": 512, 00:06:51.871 "num_blocks": 16384, 00:06:51.871 "uuid": "7a9c0c04-ebb3-4228-b77f-66b19c0e98e0", 00:06:51.871 "assigned_rate_limits": { 00:06:51.871 "rw_ios_per_sec": 0, 00:06:51.871 "rw_mbytes_per_sec": 0, 00:06:51.871 "r_mbytes_per_sec": 0, 00:06:51.871 "w_mbytes_per_sec": 0 00:06:51.871 }, 00:06:51.871 "claimed": true, 00:06:51.871 "claim_type": "exclusive_write", 00:06:51.871 "zoned": false, 00:06:51.871 "supported_io_types": { 00:06:51.871 "read": true, 00:06:51.871 "write": true, 00:06:51.871 "unmap": true, 00:06:51.871 "write_zeroes": true, 00:06:51.871 "flush": true, 00:06:51.871 "reset": true, 00:06:51.871 "compare": false, 00:06:51.871 "compare_and_write": false, 00:06:51.871 "abort": true, 00:06:51.871 "nvme_admin": false, 00:06:51.871 "nvme_io": false 00:06:51.871 }, 00:06:51.871 "memory_domains": [ 00:06:51.871 { 00:06:51.871 "dma_device_id": "system", 00:06:51.871 "dma_device_type": 1 00:06:51.871 }, 00:06:51.871 { 00:06:51.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.871 "dma_device_type": 2 00:06:51.871 } 00:06:51.871 ], 00:06:51.871 
"driver_specific": {} 00:06:51.871 }, 00:06:51.871 { 00:06:51.871 "name": "Passthru0", 00:06:51.871 "aliases": [ 00:06:51.871 "81877b64-f722-596b-a090-a597987a678b" 00:06:51.871 ], 00:06:51.871 "product_name": "passthru", 00:06:51.871 "block_size": 512, 00:06:51.871 "num_blocks": 16384, 00:06:51.871 "uuid": "81877b64-f722-596b-a090-a597987a678b", 00:06:51.871 "assigned_rate_limits": { 00:06:51.871 "rw_ios_per_sec": 0, 00:06:51.871 "rw_mbytes_per_sec": 0, 00:06:51.871 "r_mbytes_per_sec": 0, 00:06:51.871 "w_mbytes_per_sec": 0 00:06:51.871 }, 00:06:51.871 "claimed": false, 00:06:51.871 "zoned": false, 00:06:51.871 "supported_io_types": { 00:06:51.871 "read": true, 00:06:51.871 "write": true, 00:06:51.871 "unmap": true, 00:06:51.871 "write_zeroes": true, 00:06:51.871 "flush": true, 00:06:51.871 "reset": true, 00:06:51.871 "compare": false, 00:06:51.871 "compare_and_write": false, 00:06:51.871 "abort": true, 00:06:51.871 "nvme_admin": false, 00:06:51.871 "nvme_io": false 00:06:51.871 }, 00:06:51.871 "memory_domains": [ 00:06:51.871 { 00:06:51.871 "dma_device_id": "system", 00:06:51.871 "dma_device_type": 1 00:06:51.871 }, 00:06:51.871 { 00:06:51.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.871 "dma_device_type": 2 00:06:51.871 } 00:06:51.871 ], 00:06:51.871 "driver_specific": { 00:06:51.871 "passthru": { 00:06:51.871 "name": "Passthru0", 00:06:51.871 "base_bdev_name": "Malloc2" 00:06:51.871 } 00:06:51.871 } 00:06:51.871 } 00:06:51.871 ]' 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:51.871 00:06:51.871 real 0m0.249s 00:06:51.871 user 0m0.167s 00:06:51.871 sys 0m0.022s 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.871 00:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.871 ************************************ 00:06:51.871 END TEST rpc_daemon_integrity 00:06:51.871 ************************************ 00:06:51.871 00:19:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:51.871 00:19:19 rpc -- rpc/rpc.sh@84 -- # killprocess 851521 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@946 -- # '[' -z 851521 ']' 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@950 -- # kill -0 851521 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@951 -- # uname 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:51.871 00:19:19 rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 851521 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 851521' 00:06:51.871 killing process with pid 851521 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@965 -- # kill 851521 00:06:51.871 00:19:19 rpc -- common/autotest_common.sh@970 -- # wait 851521 00:06:52.131 00:06:52.131 real 0m1.796s 00:06:52.131 user 0m2.397s 00:06:52.131 sys 0m0.575s 00:06:52.131 00:19:19 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.131 00:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.131 ************************************ 00:06:52.131 END TEST rpc 00:06:52.131 ************************************ 00:06:52.131 00:19:19 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:52.131 00:19:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.131 00:19:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.131 00:19:19 -- common/autotest_common.sh@10 -- # set +x 00:06:52.389 ************************************ 00:06:52.389 START TEST skip_rpc 00:06:52.389 ************************************ 00:06:52.389 00:19:19 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:52.389 * Looking for test storage... 
00:06:52.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:52.389 00:19:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:52.389 00:19:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:52.389 00:19:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:52.389 00:19:20 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.389 00:19:20 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.389 00:19:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.389 ************************************ 00:06:52.389 START TEST skip_rpc 00:06:52.389 ************************************ 00:06:52.389 00:19:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:52.389 00:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=851896 00:06:52.389 00:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:52.389 00:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.389 00:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:52.389 [2024-07-12 00:19:20.131434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:52.389 [2024-07-12 00:19:20.131521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851896 ] 00:06:52.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.389 [2024-07-12 00:19:20.193318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.691 [2024-07-12 00:19:20.280470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.953 00:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.954 
00:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 851896 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 851896 ']' 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 851896 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 851896 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 851896' 00:06:57.954 killing process with pid 851896 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 851896 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 851896 00:06:57.954 00:06:57.954 real 0m5.261s 00:06:57.954 user 0m4.984s 00:06:57.954 sys 0m0.270s 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 ************************************ 00:06:57.954 END TEST skip_rpc 00:06:57.954 ************************************ 00:06:57.954 00:19:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:57.954 00:19:25 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:57.954 00:19:25 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 
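The skip_rpc test that just finished leans on the `NOT` helper from autotest_common.sh, visible in the trace above: it runs `rpc_cmd spdk_get_version` against a target started with `--no-rpc-server`, captures the nonzero exit status into `es`, and passes only because `(( !es == 0 ))` inverts it. A minimal, self-contained sketch of that idiom (simplified and hypothetical — the real helper also type-checks its argument with `type -t`, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT exit-status-inversion idiom from the trace.
# The real autotest_common.sh helper also validates its argument; this
# stripped-down form only inverts the wrapped command's exit status.
NOT() {
    local es=0
    "$@" || es=$?          # run the wrapped command, capture its status
    (( es != 0 ))          # succeed only if the command failed
}

NOT false && echo "expected failure detected"
NOT true  || echo "unexpected success detected"
```

This is why the log shows `es=1` followed by a passing check: an RPC call that must fail is wrapped in `NOT`, so its failure becomes the test's success.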
************************************ 00:06:57.954 START TEST skip_rpc_with_json 00:06:57.954 ************************************ 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=852407 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 852407 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 852407 ']' 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 [2024-07-12 00:19:25.442420] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:57.954 [2024-07-12 00:19:25.442501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852407 ] 00:06:57.954 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.954 [2024-07-12 00:19:25.491343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.954 [2024-07-12 00:19:25.572156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 [2024-07-12 00:19:25.766752] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:57.954 request: 00:06:57.954 { 00:06:57.954 "trtype": "tcp", 00:06:57.954 "method": "nvmf_get_transports", 00:06:57.954 "req_id": 1 00:06:57.954 } 00:06:57.954 Got JSON-RPC error response 00:06:57.954 response: 00:06:57.954 { 00:06:57.954 "code": -19, 00:06:57.954 "message": "No such device" 00:06:57.954 } 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.954 [2024-07-12 00:19:25.774846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.954 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:58.212 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.212 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:58.212 { 00:06:58.212 "subsystems": [ 00:06:58.212 { 00:06:58.212 "subsystem": "vfio_user_target", 00:06:58.212 "config": null 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "keyring", 00:06:58.212 "config": [] 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "iobuf", 00:06:58.212 "config": [ 00:06:58.212 { 00:06:58.212 "method": "iobuf_set_options", 00:06:58.212 "params": { 00:06:58.212 "small_pool_count": 8192, 00:06:58.212 "large_pool_count": 1024, 00:06:58.212 "small_bufsize": 8192, 00:06:58.212 "large_bufsize": 135168 00:06:58.212 } 00:06:58.212 } 00:06:58.212 ] 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "sock", 00:06:58.212 "config": [ 00:06:58.212 { 00:06:58.212 "method": "sock_set_default_impl", 00:06:58.212 "params": { 00:06:58.212 "impl_name": "posix" 00:06:58.212 } 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "method": "sock_impl_set_options", 00:06:58.212 "params": { 00:06:58.212 "impl_name": "ssl", 00:06:58.212 "recv_buf_size": 4096, 00:06:58.212 "send_buf_size": 4096, 00:06:58.212 "enable_recv_pipe": true, 00:06:58.212 "enable_quickack": false, 00:06:58.212 "enable_placement_id": 0, 00:06:58.212 "enable_zerocopy_send_server": true, 00:06:58.212 "enable_zerocopy_send_client": false, 00:06:58.212 "zerocopy_threshold": 0, 00:06:58.212 "tls_version": 0, 
00:06:58.212 "enable_ktls": false 00:06:58.212 } 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "method": "sock_impl_set_options", 00:06:58.212 "params": { 00:06:58.212 "impl_name": "posix", 00:06:58.212 "recv_buf_size": 2097152, 00:06:58.212 "send_buf_size": 2097152, 00:06:58.212 "enable_recv_pipe": true, 00:06:58.212 "enable_quickack": false, 00:06:58.212 "enable_placement_id": 0, 00:06:58.212 "enable_zerocopy_send_server": true, 00:06:58.212 "enable_zerocopy_send_client": false, 00:06:58.212 "zerocopy_threshold": 0, 00:06:58.212 "tls_version": 0, 00:06:58.212 "enable_ktls": false 00:06:58.212 } 00:06:58.212 } 00:06:58.212 ] 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "vmd", 00:06:58.212 "config": [] 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "accel", 00:06:58.212 "config": [ 00:06:58.212 { 00:06:58.212 "method": "accel_set_options", 00:06:58.212 "params": { 00:06:58.212 "small_cache_size": 128, 00:06:58.212 "large_cache_size": 16, 00:06:58.212 "task_count": 2048, 00:06:58.212 "sequence_count": 2048, 00:06:58.212 "buf_count": 2048 00:06:58.212 } 00:06:58.212 } 00:06:58.212 ] 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "subsystem": "bdev", 00:06:58.212 "config": [ 00:06:58.212 { 00:06:58.212 "method": "bdev_set_options", 00:06:58.212 "params": { 00:06:58.212 "bdev_io_pool_size": 65535, 00:06:58.212 "bdev_io_cache_size": 256, 00:06:58.212 "bdev_auto_examine": true, 00:06:58.212 "iobuf_small_cache_size": 128, 00:06:58.212 "iobuf_large_cache_size": 16 00:06:58.212 } 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "method": "bdev_raid_set_options", 00:06:58.212 "params": { 00:06:58.212 "process_window_size_kb": 1024 00:06:58.212 } 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "method": "bdev_iscsi_set_options", 00:06:58.212 "params": { 00:06:58.212 "timeout_sec": 30 00:06:58.212 } 00:06:58.212 }, 00:06:58.212 { 00:06:58.212 "method": "bdev_nvme_set_options", 00:06:58.212 "params": { 00:06:58.212 "action_on_timeout": "none", 00:06:58.212 "timeout_us": 
0, 00:06:58.212 "timeout_admin_us": 0, 00:06:58.212 "keep_alive_timeout_ms": 10000, 00:06:58.212 "arbitration_burst": 0, 00:06:58.212 "low_priority_weight": 0, 00:06:58.212 "medium_priority_weight": 0, 00:06:58.212 "high_priority_weight": 0, 00:06:58.212 "nvme_adminq_poll_period_us": 10000, 00:06:58.212 "nvme_ioq_poll_period_us": 0, 00:06:58.212 "io_queue_requests": 0, 00:06:58.212 "delay_cmd_submit": true, 00:06:58.212 "transport_retry_count": 4, 00:06:58.212 "bdev_retry_count": 3, 00:06:58.212 "transport_ack_timeout": 0, 00:06:58.212 "ctrlr_loss_timeout_sec": 0, 00:06:58.212 "reconnect_delay_sec": 0, 00:06:58.212 "fast_io_fail_timeout_sec": 0, 00:06:58.212 "disable_auto_failback": false, 00:06:58.212 "generate_uuids": false, 00:06:58.212 "transport_tos": 0, 00:06:58.212 "nvme_error_stat": false, 00:06:58.212 "rdma_srq_size": 0, 00:06:58.212 "io_path_stat": false, 00:06:58.212 "allow_accel_sequence": false, 00:06:58.212 "rdma_max_cq_size": 0, 00:06:58.212 "rdma_cm_event_timeout_ms": 0, 00:06:58.212 "dhchap_digests": [ 00:06:58.212 "sha256", 00:06:58.212 "sha384", 00:06:58.212 "sha512" 00:06:58.212 ], 00:06:58.212 "dhchap_dhgroups": [ 00:06:58.212 "null", 00:06:58.212 "ffdhe2048", 00:06:58.212 "ffdhe3072", 00:06:58.213 "ffdhe4096", 00:06:58.213 "ffdhe6144", 00:06:58.213 "ffdhe8192" 00:06:58.213 ] 00:06:58.213 } 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "method": "bdev_nvme_set_hotplug", 00:06:58.213 "params": { 00:06:58.213 "period_us": 100000, 00:06:58.213 "enable": false 00:06:58.213 } 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "method": "bdev_wait_for_examine" 00:06:58.213 } 00:06:58.213 ] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "scsi", 00:06:58.213 "config": null 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "scheduler", 00:06:58.213 "config": [ 00:06:58.213 { 00:06:58.213 "method": "framework_set_scheduler", 00:06:58.213 "params": { 00:06:58.213 "name": "static" 00:06:58.213 } 00:06:58.213 } 00:06:58.213 ] 00:06:58.213 }, 
00:06:58.213 { 00:06:58.213 "subsystem": "vhost_scsi", 00:06:58.213 "config": [] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "vhost_blk", 00:06:58.213 "config": [] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "ublk", 00:06:58.213 "config": [] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "nbd", 00:06:58.213 "config": [] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "nvmf", 00:06:58.213 "config": [ 00:06:58.213 { 00:06:58.213 "method": "nvmf_set_config", 00:06:58.213 "params": { 00:06:58.213 "discovery_filter": "match_any", 00:06:58.213 "admin_cmd_passthru": { 00:06:58.213 "identify_ctrlr": false 00:06:58.213 } 00:06:58.213 } 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "method": "nvmf_set_max_subsystems", 00:06:58.213 "params": { 00:06:58.213 "max_subsystems": 1024 00:06:58.213 } 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "method": "nvmf_set_crdt", 00:06:58.213 "params": { 00:06:58.213 "crdt1": 0, 00:06:58.213 "crdt2": 0, 00:06:58.213 "crdt3": 0 00:06:58.213 } 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "method": "nvmf_create_transport", 00:06:58.213 "params": { 00:06:58.213 "trtype": "TCP", 00:06:58.213 "max_queue_depth": 128, 00:06:58.213 "max_io_qpairs_per_ctrlr": 127, 00:06:58.213 "in_capsule_data_size": 4096, 00:06:58.213 "max_io_size": 131072, 00:06:58.213 "io_unit_size": 131072, 00:06:58.213 "max_aq_depth": 128, 00:06:58.213 "num_shared_buffers": 511, 00:06:58.213 "buf_cache_size": 4294967295, 00:06:58.213 "dif_insert_or_strip": false, 00:06:58.213 "zcopy": false, 00:06:58.213 "c2h_success": true, 00:06:58.213 "sock_priority": 0, 00:06:58.213 "abort_timeout_sec": 1, 00:06:58.213 "ack_timeout": 0, 00:06:58.213 "data_wr_pool_size": 0 00:06:58.213 } 00:06:58.213 } 00:06:58.213 ] 00:06:58.213 }, 00:06:58.213 { 00:06:58.213 "subsystem": "iscsi", 00:06:58.213 "config": [ 00:06:58.213 { 00:06:58.213 "method": "iscsi_set_options", 00:06:58.213 "params": { 00:06:58.213 "node_base": "iqn.2016-06.io.spdk", 00:06:58.213 
"max_sessions": 128, 00:06:58.213 "max_connections_per_session": 2, 00:06:58.213 "max_queue_depth": 64, 00:06:58.213 "default_time2wait": 2, 00:06:58.213 "default_time2retain": 20, 00:06:58.213 "first_burst_length": 8192, 00:06:58.213 "immediate_data": true, 00:06:58.213 "allow_duplicated_isid": false, 00:06:58.213 "error_recovery_level": 0, 00:06:58.213 "nop_timeout": 60, 00:06:58.213 "nop_in_interval": 30, 00:06:58.213 "disable_chap": false, 00:06:58.213 "require_chap": false, 00:06:58.213 "mutual_chap": false, 00:06:58.213 "chap_group": 0, 00:06:58.213 "max_large_datain_per_connection": 64, 00:06:58.213 "max_r2t_per_connection": 4, 00:06:58.213 "pdu_pool_size": 36864, 00:06:58.213 "immediate_data_pool_size": 16384, 00:06:58.213 "data_out_pool_size": 2048 00:06:58.213 } 00:06:58.213 } 00:06:58.213 ] 00:06:58.213 } 00:06:58.213 ] 00:06:58.213 } 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 852407 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 852407 ']' 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 852407 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 852407 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 852407' 00:06:58.213 killing process with pid 852407 00:06:58.213 
00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 852407 00:06:58.213 00:19:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 852407 00:06:58.469 00:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=852442 00:06:58.469 00:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:58.469 00:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 852442 ']' 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 852442' 00:07:03.727 killing process with pid 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 852442 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:03.727 00:07:03.727 real 0m6.070s 00:07:03.727 user 0m5.780s 00:07:03.727 sys 0m0.563s 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.727 ************************************ 00:07:03.727 END TEST skip_rpc_with_json 00:07:03.727 ************************************ 00:07:03.727 00:19:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.727 00:19:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.727 00:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.727 00:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.727 ************************************ 00:07:03.727 START TEST skip_rpc_with_delay 00:07:03.727 ************************************ 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:03.727 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.986 [2024-07-12 00:19:31.576824] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:03.986 [2024-07-12 00:19:31.576968] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.986 00:07:03.986 real 0m0.078s 00:07:03.986 user 0m0.053s 00:07:03.986 sys 0m0.025s 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.986 00:19:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 ************************************ 00:07:03.986 END TEST skip_rpc_with_delay 00:07:03.986 ************************************ 00:07:03.986 00:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.986 00:19:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.986 00:19:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.986 00:19:31 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.986 00:19:31 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.986 00:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 ************************************ 00:07:03.986 START TEST exit_on_failed_rpc_init 00:07:03.986 ************************************ 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=852994 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 852994 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 852994 ']' 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.986 00:19:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.986 [2024-07-12 00:19:31.699844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:03.986 [2024-07-12 00:19:31.699951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852994 ] 00:07:03.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.986 [2024-07-12 00:19:31.761866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.245 [2024-07-12 00:19:31.852688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:04.245 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.503 [2024-07-12 00:19:32.133574] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.503 [2024-07-12 00:19:32.133685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853084 ] 00:07:04.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.503 [2024-07-12 00:19:32.188182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.503 [2024-07-12 00:19:32.269441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.503 [2024-07-12 00:19:32.269552] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:04.503 [2024-07-12 00:19:32.269569] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.503 [2024-07-12 00:19:32.269580] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 852994 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 852994 ']' 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 852994 00:07:04.503 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 852994 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 852994' 
00:07:04.763 killing process with pid 852994 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 852994 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 852994 00:07:04.763 00:07:04.763 real 0m0.951s 00:07:04.763 user 0m1.113s 00:07:04.763 sys 0m0.402s 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.763 00:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:04.763 ************************************ 00:07:04.763 END TEST exit_on_failed_rpc_init 00:07:04.763 ************************************ 00:07:05.020 00:19:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:05.020 00:07:05.020 real 0m12.631s 00:07:05.020 user 0m12.033s 00:07:05.020 sys 0m1.444s 00:07:05.020 00:19:32 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.020 00:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 ************************************ 00:07:05.020 END TEST skip_rpc 00:07:05.020 ************************************ 00:07:05.020 00:19:32 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:05.020 00:19:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.020 00:19:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.020 00:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 ************************************ 00:07:05.020 START TEST rpc_client 00:07:05.020 ************************************ 00:07:05.020 00:19:32 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:05.020 * Looking for test storage... 
00:07:05.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:05.020 00:19:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:05.020 OK 00:07:05.020 00:19:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:05.020 00:07:05.020 real 0m0.071s 00:07:05.020 user 0m0.034s 00:07:05.020 sys 0m0.042s 00:07:05.020 00:19:32 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.020 00:19:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 ************************************ 00:07:05.020 END TEST rpc_client 00:07:05.020 ************************************ 00:07:05.020 00:19:32 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:05.020 00:19:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.020 00:19:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.020 00:19:32 -- common/autotest_common.sh@10 -- # set +x 00:07:05.020 ************************************ 00:07:05.020 START TEST json_config 00:07:05.020 ************************************ 00:07:05.020 00:19:32 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.020 00:19:32 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.020 00:19:32 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.020 00:19:32 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.020 00:19:32 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.020 00:19:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:05.020 00:19:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 00:19:32 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 00:19:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:05.020 00:19:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@47 -- # : 0 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.020 00:19:32 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.020 00:19:32 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:05.020 00:19:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:05.021 00:19:32 json_config -- 
json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:05.021 INFO: JSON configuration test init 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:05.021 00:19:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.021 00:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.021 00:19:32 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:05.021 00:19:32 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:05.021 00:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.279 00:19:32 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:05.279 00:19:32 json_config -- json_config/common.sh@9 -- # local app=target 00:07:05.279 00:19:32 json_config -- json_config/common.sh@10 -- # shift 00:07:05.279 00:19:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:05.279 00:19:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:05.279 00:19:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:05.279 00:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.279 00:19:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:05.279 00:19:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=853211 00:07:05.279 00:19:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:05.279 00:19:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:07:05.279 Waiting for target to run... 00:07:05.279 00:19:32 json_config -- json_config/common.sh@25 -- # waitforlisten 853211 /var/tmp/spdk_tgt.sock 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@827 -- # '[' -z 853211 ']' 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:05.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.279 00:19:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:05.279 [2024-07-12 00:19:32.915775] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:05.279 [2024-07-12 00:19:32.915885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853211 ] 00:07:05.279 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.537 [2024-07-12 00:19:33.273034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.537 [2024-07-12 00:19:33.339991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@860 -- # return 0 00:07:06.473 00:19:33 json_config -- json_config/common.sh@26 -- # echo '' 00:07:06.473 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.473 00:19:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:06.473 00:19:33 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:06.473 00:19:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:09.766 00:19:37 json_config -- 
json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:09.766 00:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 
00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:09.766 00:19:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:07:09.766 00:19:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:09.766 00:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:10.056 MallocForNvmf0 00:07:10.056 00:19:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:10.056 00:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:10.312 MallocForNvmf1 00:07:10.312 00:19:37 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:10.312 00:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:10.569 [2024-07-12 00:19:38.184603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.569 00:19:38 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.569 00:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.827 00:19:38 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:10.827 00:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:11.084 00:19:38 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:11.084 00:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:11.341 00:19:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:11.341 00:19:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:11.341 [2024-07-12 00:19:39.159647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:11.341 00:19:39 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:07:11.341 00:19:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.341 00:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.599 00:19:39 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:11.599 00:19:39 
json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.599 00:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.599 00:19:39 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:11.599 00:19:39 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:11.599 00:19:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:11.599 MallocBdevForConfigChangeCheck 00:07:11.855 00:19:39 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:11.855 00:19:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:11.855 00:19:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.855 00:19:39 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:11.855 00:19:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:12.113 00:19:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:07:12.113 INFO: shutting down applications... 
00:07:12.113 00:19:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:12.113 00:19:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:12.113 00:19:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:12.113 00:19:39 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:14.008 Calling clear_iscsi_subsystem 00:07:14.008 Calling clear_nvmf_subsystem 00:07:14.008 Calling clear_nbd_subsystem 00:07:14.008 Calling clear_ublk_subsystem 00:07:14.008 Calling clear_vhost_blk_subsystem 00:07:14.008 Calling clear_vhost_scsi_subsystem 00:07:14.008 Calling clear_bdev_subsystem 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:14.008 00:19:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:14.268 00:19:41 json_config -- json_config/json_config.sh@345 -- # break 00:07:14.268 00:19:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:07:14.268 00:19:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:07:14.268 00:19:41 json_config -- 
json_config/common.sh@31 -- # local app=target 00:07:14.268 00:19:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:14.268 00:19:41 json_config -- json_config/common.sh@35 -- # [[ -n 853211 ]] 00:07:14.268 00:19:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 853211 00:07:14.268 00:19:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:14.268 00:19:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.268 00:19:41 json_config -- json_config/common.sh@41 -- # kill -0 853211 00:07:14.268 00:19:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:14.835 00:19:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:14.835 00:19:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:14.835 00:19:42 json_config -- json_config/common.sh@41 -- # kill -0 853211 00:07:14.835 00:19:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:14.835 00:19:42 json_config -- json_config/common.sh@43 -- # break 00:07:14.835 00:19:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:14.835 00:19:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:14.835 SPDK target shutdown done 00:07:14.835 00:19:42 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:07:14.835 INFO: relaunching applications... 
00:07:14.835 00:19:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.835 00:19:42 json_config -- json_config/common.sh@9 -- # local app=target 00:07:14.835 00:19:42 json_config -- json_config/common.sh@10 -- # shift 00:07:14.835 00:19:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:14.835 00:19:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:14.835 00:19:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:14.835 00:19:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:14.835 00:19:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:14.835 00:19:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=854231 00:07:14.835 00:19:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:14.835 00:19:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:14.835 Waiting for target to run... 00:07:14.835 00:19:42 json_config -- json_config/common.sh@25 -- # waitforlisten 854231 /var/tmp/spdk_tgt.sock 00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@827 -- # '[' -z 854231 ']' 00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:14.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.835 00:19:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.835 [2024-07-12 00:19:42.455982] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:14.835 [2024-07-12 00:19:42.456069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854231 ] 00:07:14.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.093 [2024-07-12 00:19:42.757689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.093 [2024-07-12 00:19:42.823486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.384 [2024-07-12 00:19:45.817833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.384 [2024-07-12 00:19:45.850151] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:18.384 00:19:45 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.384 00:19:45 json_config -- common/autotest_common.sh@860 -- # return 0 00:07:18.384 00:19:45 json_config -- json_config/common.sh@26 -- # echo '' 00:07:18.384 00:07:18.384 00:19:45 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:07:18.384 00:19:45 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:18.384 INFO: Checking if target configuration is the same... 
00:07:18.384 00:19:45 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.384 00:19:45 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:07:18.384 00:19:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.384 + '[' 2 -ne 2 ']' 00:07:18.384 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:18.384 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:18.384 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.384 +++ basename /dev/fd/62 00:07:18.384 ++ mktemp /tmp/62.XXX 00:07:18.384 + tmp_file_1=/tmp/62.4Tx 00:07:18.384 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.384 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:18.384 + tmp_file_2=/tmp/spdk_tgt_config.json.AWQ 00:07:18.384 + ret=0 00:07:18.384 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:18.643 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:18.643 + diff -u /tmp/62.4Tx /tmp/spdk_tgt_config.json.AWQ 00:07:18.643 + echo 'INFO: JSON config files are the same' 00:07:18.643 INFO: JSON config files are the same 00:07:18.643 + rm /tmp/62.4Tx /tmp/spdk_tgt_config.json.AWQ 00:07:18.643 + exit 0 00:07:18.643 00:19:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:07:18.643 00:19:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:18.643 INFO: changing configuration and checking if this can be detected... 
00:07:18.643 00:19:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:18.643 00:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:18.901 00:19:46 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.901 00:19:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:07:18.901 00:19:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:18.901 + '[' 2 -ne 2 ']' 00:07:18.901 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:18.901 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:18.901 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:18.901 +++ basename /dev/fd/62 00:07:18.901 ++ mktemp /tmp/62.XXX 00:07:18.901 + tmp_file_1=/tmp/62.W1X 00:07:18.901 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:18.901 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:18.901 + tmp_file_2=/tmp/spdk_tgt_config.json.iPh 00:07:18.901 + ret=0 00:07:18.901 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:19.159 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:19.417 + diff -u /tmp/62.W1X /tmp/spdk_tgt_config.json.iPh 00:07:19.417 + ret=1 00:07:19.417 + echo '=== Start of file: /tmp/62.W1X ===' 00:07:19.417 + cat /tmp/62.W1X 00:07:19.417 + echo '=== End of file: /tmp/62.W1X ===' 00:07:19.417 + echo '' 00:07:19.417 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iPh ===' 00:07:19.417 + cat /tmp/spdk_tgt_config.json.iPh 00:07:19.417 + echo '=== End of file: /tmp/spdk_tgt_config.json.iPh ===' 00:07:19.417 + echo '' 00:07:19.417 + rm /tmp/62.W1X /tmp/spdk_tgt_config.json.iPh 00:07:19.417 + exit 1 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:07:19.417 INFO: configuration change detected. 
00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 854231 ]] 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@193 -- # uname -s 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.417 00:19:47 json_config -- json_config/json_config.sh@323 -- # killprocess 854231 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@946 -- # '[' -z 854231 ']' 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@950 -- # kill -0 854231 
00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@951 -- # uname 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 854231 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 854231' 00:07:19.417 killing process with pid 854231 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@965 -- # kill 854231 00:07:19.417 00:19:47 json_config -- common/autotest_common.sh@970 -- # wait 854231 00:07:20.790 00:19:48 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:20.790 00:19:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:07:20.790 00:19:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.790 00:19:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.790 00:19:48 json_config -- json_config/json_config.sh@328 -- # return 0 00:07:20.790 00:19:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:07:20.790 INFO: Success 00:07:20.790 00:07:20.790 real 0m15.803s 00:07:20.790 user 0m18.044s 00:07:20.790 sys 0m1.789s 00:07:20.790 00:19:48 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.790 00:19:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.790 ************************************ 00:07:20.790 END TEST json_config 00:07:20.790 ************************************ 00:07:20.790 00:19:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:20.790 00:19:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.790 00:19:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.790 00:19:48 -- common/autotest_common.sh@10 -- # set +x 00:07:21.049 ************************************ 00:07:21.049 START TEST json_config_extra_key 00:07:21.049 ************************************ 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:21.050 00:19:48 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.050 00:19:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.050 00:19:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.050 00:19:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.050 00:19:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.050 00:19:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.050 00:19:48 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.050 00:19:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:21.050 00:19:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.050 00:19:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:21.050 00:19:48 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:21.050 INFO: launching applications... 
00:07:21.050 00:19:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=854941 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:21.050 Waiting for target to run... 
00:07:21.050 00:19:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 854941 /var/tmp/spdk_tgt.sock 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 854941 ']' 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:21.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:21.050 00:19:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:21.050 [2024-07-12 00:19:48.761292] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:21.050 [2024-07-12 00:19:48.761382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854941 ] 00:07:21.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.309 [2024-07-12 00:19:49.068905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.309 [2024-07-12 00:19:49.134404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.244 00:19:49 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:22.244 00:19:49 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:22.244 00:07:22.244 00:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:22.244 INFO: shutting down applications... 
00:07:22.244 00:19:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 854941 ]] 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 854941 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 854941 00:07:22.244 00:19:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 854941 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:22.503 00:19:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:22.503 SPDK target shutdown done 00:07:22.503 00:19:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:22.503 Success 00:07:22.503 00:07:22.503 real 0m1.648s 00:07:22.503 user 0m1.534s 00:07:22.503 sys 0m0.406s 00:07:22.503 00:19:50 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.503 00:19:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 ************************************ 
00:07:22.503 END TEST json_config_extra_key 00:07:22.503 ************************************ 00:07:22.503 00:19:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:22.503 00:19:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:22.503 00:19:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.503 00:19:50 -- common/autotest_common.sh@10 -- # set +x 00:07:22.762 ************************************ 00:07:22.762 START TEST alias_rpc 00:07:22.762 ************************************ 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:22.762 * Looking for test storage... 00:07:22.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:22.762 00:19:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:22.762 00:19:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=855108 00:07:22.762 00:19:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 855108 00:07:22.762 00:19:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 855108 ']' 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.762 00:19:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.762 [2024-07-12 00:19:50.464782] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:22.762 [2024-07-12 00:19:50.464893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855108 ] 00:07:22.762 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.762 [2024-07-12 00:19:50.528376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.021 [2024-07-12 00:19:50.615798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.021 00:19:50 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.021 00:19:50 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:23.021 00:19:50 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:23.587 00:19:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 855108 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 855108 ']' 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 855108 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 855108 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 855108' 00:07:23.587 killing process with pid 
855108 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@965 -- # kill 855108 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@970 -- # wait 855108 00:07:23.587 00:07:23.587 real 0m1.046s 00:07:23.587 user 0m1.246s 00:07:23.587 sys 0m0.375s 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.587 00:19:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.587 ************************************ 00:07:23.587 END TEST alias_rpc 00:07:23.587 ************************************ 00:07:23.587 00:19:51 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:23.587 00:19:51 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:23.587 00:19:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.587 00:19:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.587 00:19:51 -- common/autotest_common.sh@10 -- # set +x 00:07:23.845 ************************************ 00:07:23.845 START TEST spdkcli_tcp 00:07:23.845 ************************************ 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:23.845 * Looking for test storage... 
00:07:23.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=855262 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:23.845 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 855262 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 855262 ']' 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.845 00:19:51 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.846 00:19:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.846 [2024-07-12 00:19:51.564644] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:23.846 [2024-07-12 00:19:51.564750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855262 ] 00:07:23.846 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.846 [2024-07-12 00:19:51.620784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.132 [2024-07-12 00:19:51.702918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.132 [2024-07-12 00:19:51.702928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.132 00:19:51 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.132 00:19:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:07:24.132 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=855351 00:07:24.132 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:24.132 00:19:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:24.415 [ 00:07:24.415 "bdev_malloc_delete", 00:07:24.415 "bdev_malloc_create", 00:07:24.415 "bdev_null_resize", 00:07:24.415 "bdev_null_delete", 00:07:24.415 "bdev_null_create", 00:07:24.415 "bdev_nvme_cuse_unregister", 00:07:24.415 "bdev_nvme_cuse_register", 00:07:24.415 "bdev_opal_new_user", 00:07:24.415 "bdev_opal_set_lock_state", 00:07:24.415 "bdev_opal_delete", 00:07:24.415 "bdev_opal_get_info", 00:07:24.415 "bdev_opal_create", 00:07:24.415 "bdev_nvme_opal_revert", 00:07:24.415 "bdev_nvme_opal_init", 00:07:24.415 
"bdev_nvme_send_cmd", 00:07:24.415 "bdev_nvme_get_path_iostat", 00:07:24.415 "bdev_nvme_get_mdns_discovery_info", 00:07:24.415 "bdev_nvme_stop_mdns_discovery", 00:07:24.415 "bdev_nvme_start_mdns_discovery", 00:07:24.415 "bdev_nvme_set_multipath_policy", 00:07:24.415 "bdev_nvme_set_preferred_path", 00:07:24.415 "bdev_nvme_get_io_paths", 00:07:24.415 "bdev_nvme_remove_error_injection", 00:07:24.415 "bdev_nvme_add_error_injection", 00:07:24.415 "bdev_nvme_get_discovery_info", 00:07:24.415 "bdev_nvme_stop_discovery", 00:07:24.415 "bdev_nvme_start_discovery", 00:07:24.415 "bdev_nvme_get_controller_health_info", 00:07:24.415 "bdev_nvme_disable_controller", 00:07:24.415 "bdev_nvme_enable_controller", 00:07:24.415 "bdev_nvme_reset_controller", 00:07:24.415 "bdev_nvme_get_transport_statistics", 00:07:24.415 "bdev_nvme_apply_firmware", 00:07:24.415 "bdev_nvme_detach_controller", 00:07:24.415 "bdev_nvme_get_controllers", 00:07:24.415 "bdev_nvme_attach_controller", 00:07:24.415 "bdev_nvme_set_hotplug", 00:07:24.415 "bdev_nvme_set_options", 00:07:24.415 "bdev_passthru_delete", 00:07:24.415 "bdev_passthru_create", 00:07:24.415 "bdev_lvol_set_parent_bdev", 00:07:24.415 "bdev_lvol_set_parent", 00:07:24.415 "bdev_lvol_check_shallow_copy", 00:07:24.415 "bdev_lvol_start_shallow_copy", 00:07:24.415 "bdev_lvol_grow_lvstore", 00:07:24.415 "bdev_lvol_get_lvols", 00:07:24.415 "bdev_lvol_get_lvstores", 00:07:24.415 "bdev_lvol_delete", 00:07:24.415 "bdev_lvol_set_read_only", 00:07:24.415 "bdev_lvol_resize", 00:07:24.415 "bdev_lvol_decouple_parent", 00:07:24.415 "bdev_lvol_inflate", 00:07:24.415 "bdev_lvol_rename", 00:07:24.415 "bdev_lvol_clone_bdev", 00:07:24.415 "bdev_lvol_clone", 00:07:24.415 "bdev_lvol_snapshot", 00:07:24.415 "bdev_lvol_create", 00:07:24.415 "bdev_lvol_delete_lvstore", 00:07:24.415 "bdev_lvol_rename_lvstore", 00:07:24.415 "bdev_lvol_create_lvstore", 00:07:24.415 "bdev_raid_set_options", 00:07:24.415 "bdev_raid_remove_base_bdev", 00:07:24.415 "bdev_raid_add_base_bdev", 
00:07:24.415 "bdev_raid_delete", 00:07:24.415 "bdev_raid_create", 00:07:24.415 "bdev_raid_get_bdevs", 00:07:24.415 "bdev_error_inject_error", 00:07:24.416 "bdev_error_delete", 00:07:24.416 "bdev_error_create", 00:07:24.416 "bdev_split_delete", 00:07:24.416 "bdev_split_create", 00:07:24.416 "bdev_delay_delete", 00:07:24.416 "bdev_delay_create", 00:07:24.416 "bdev_delay_update_latency", 00:07:24.416 "bdev_zone_block_delete", 00:07:24.416 "bdev_zone_block_create", 00:07:24.416 "blobfs_create", 00:07:24.416 "blobfs_detect", 00:07:24.416 "blobfs_set_cache_size", 00:07:24.416 "bdev_aio_delete", 00:07:24.416 "bdev_aio_rescan", 00:07:24.416 "bdev_aio_create", 00:07:24.416 "bdev_ftl_set_property", 00:07:24.416 "bdev_ftl_get_properties", 00:07:24.416 "bdev_ftl_get_stats", 00:07:24.416 "bdev_ftl_unmap", 00:07:24.416 "bdev_ftl_unload", 00:07:24.416 "bdev_ftl_delete", 00:07:24.416 "bdev_ftl_load", 00:07:24.416 "bdev_ftl_create", 00:07:24.416 "bdev_virtio_attach_controller", 00:07:24.416 "bdev_virtio_scsi_get_devices", 00:07:24.416 "bdev_virtio_detach_controller", 00:07:24.416 "bdev_virtio_blk_set_hotplug", 00:07:24.416 "bdev_iscsi_delete", 00:07:24.416 "bdev_iscsi_create", 00:07:24.416 "bdev_iscsi_set_options", 00:07:24.416 "accel_error_inject_error", 00:07:24.416 "ioat_scan_accel_module", 00:07:24.416 "dsa_scan_accel_module", 00:07:24.416 "iaa_scan_accel_module", 00:07:24.416 "vfu_virtio_create_scsi_endpoint", 00:07:24.416 "vfu_virtio_scsi_remove_target", 00:07:24.416 "vfu_virtio_scsi_add_target", 00:07:24.416 "vfu_virtio_create_blk_endpoint", 00:07:24.416 "vfu_virtio_delete_endpoint", 00:07:24.416 "keyring_file_remove_key", 00:07:24.416 "keyring_file_add_key", 00:07:24.416 "keyring_linux_set_options", 00:07:24.416 "iscsi_get_histogram", 00:07:24.416 "iscsi_enable_histogram", 00:07:24.416 "iscsi_set_options", 00:07:24.416 "iscsi_get_auth_groups", 00:07:24.416 "iscsi_auth_group_remove_secret", 00:07:24.416 "iscsi_auth_group_add_secret", 00:07:24.416 "iscsi_delete_auth_group", 
00:07:24.416 "iscsi_create_auth_group", 00:07:24.416 "iscsi_set_discovery_auth", 00:07:24.416 "iscsi_get_options", 00:07:24.416 "iscsi_target_node_request_logout", 00:07:24.416 "iscsi_target_node_set_redirect", 00:07:24.416 "iscsi_target_node_set_auth", 00:07:24.416 "iscsi_target_node_add_lun", 00:07:24.416 "iscsi_get_stats", 00:07:24.416 "iscsi_get_connections", 00:07:24.416 "iscsi_portal_group_set_auth", 00:07:24.416 "iscsi_start_portal_group", 00:07:24.416 "iscsi_delete_portal_group", 00:07:24.416 "iscsi_create_portal_group", 00:07:24.416 "iscsi_get_portal_groups", 00:07:24.416 "iscsi_delete_target_node", 00:07:24.416 "iscsi_target_node_remove_pg_ig_maps", 00:07:24.416 "iscsi_target_node_add_pg_ig_maps", 00:07:24.416 "iscsi_create_target_node", 00:07:24.416 "iscsi_get_target_nodes", 00:07:24.416 "iscsi_delete_initiator_group", 00:07:24.416 "iscsi_initiator_group_remove_initiators", 00:07:24.416 "iscsi_initiator_group_add_initiators", 00:07:24.416 "iscsi_create_initiator_group", 00:07:24.416 "iscsi_get_initiator_groups", 00:07:24.416 "nvmf_set_crdt", 00:07:24.416 "nvmf_set_config", 00:07:24.416 "nvmf_set_max_subsystems", 00:07:24.416 "nvmf_stop_mdns_prr", 00:07:24.416 "nvmf_publish_mdns_prr", 00:07:24.416 "nvmf_subsystem_get_listeners", 00:07:24.416 "nvmf_subsystem_get_qpairs", 00:07:24.416 "nvmf_subsystem_get_controllers", 00:07:24.416 "nvmf_get_stats", 00:07:24.416 "nvmf_get_transports", 00:07:24.416 "nvmf_create_transport", 00:07:24.416 "nvmf_get_targets", 00:07:24.416 "nvmf_delete_target", 00:07:24.416 "nvmf_create_target", 00:07:24.416 "nvmf_subsystem_allow_any_host", 00:07:24.416 "nvmf_subsystem_remove_host", 00:07:24.416 "nvmf_subsystem_add_host", 00:07:24.416 "nvmf_ns_remove_host", 00:07:24.416 "nvmf_ns_add_host", 00:07:24.416 "nvmf_subsystem_remove_ns", 00:07:24.416 "nvmf_subsystem_add_ns", 00:07:24.416 "nvmf_subsystem_listener_set_ana_state", 00:07:24.416 "nvmf_discovery_get_referrals", 00:07:24.416 "nvmf_discovery_remove_referral", 00:07:24.416 
"nvmf_discovery_add_referral", 00:07:24.416 "nvmf_subsystem_remove_listener", 00:07:24.416 "nvmf_subsystem_add_listener", 00:07:24.416 "nvmf_delete_subsystem", 00:07:24.416 "nvmf_create_subsystem", 00:07:24.416 "nvmf_get_subsystems", 00:07:24.416 "env_dpdk_get_mem_stats", 00:07:24.416 "nbd_get_disks", 00:07:24.416 "nbd_stop_disk", 00:07:24.416 "nbd_start_disk", 00:07:24.416 "ublk_recover_disk", 00:07:24.416 "ublk_get_disks", 00:07:24.416 "ublk_stop_disk", 00:07:24.416 "ublk_start_disk", 00:07:24.416 "ublk_destroy_target", 00:07:24.416 "ublk_create_target", 00:07:24.416 "virtio_blk_create_transport", 00:07:24.416 "virtio_blk_get_transports", 00:07:24.416 "vhost_controller_set_coalescing", 00:07:24.416 "vhost_get_controllers", 00:07:24.416 "vhost_delete_controller", 00:07:24.416 "vhost_create_blk_controller", 00:07:24.416 "vhost_scsi_controller_remove_target", 00:07:24.416 "vhost_scsi_controller_add_target", 00:07:24.416 "vhost_start_scsi_controller", 00:07:24.416 "vhost_create_scsi_controller", 00:07:24.416 "thread_set_cpumask", 00:07:24.416 "framework_get_scheduler", 00:07:24.416 "framework_set_scheduler", 00:07:24.416 "framework_get_reactors", 00:07:24.416 "thread_get_io_channels", 00:07:24.416 "thread_get_pollers", 00:07:24.416 "thread_get_stats", 00:07:24.416 "framework_monitor_context_switch", 00:07:24.416 "spdk_kill_instance", 00:07:24.416 "log_enable_timestamps", 00:07:24.416 "log_get_flags", 00:07:24.416 "log_clear_flag", 00:07:24.416 "log_set_flag", 00:07:24.416 "log_get_level", 00:07:24.416 "log_set_level", 00:07:24.416 "log_get_print_level", 00:07:24.416 "log_set_print_level", 00:07:24.416 "framework_enable_cpumask_locks", 00:07:24.416 "framework_disable_cpumask_locks", 00:07:24.416 "framework_wait_init", 00:07:24.416 "framework_start_init", 00:07:24.416 "scsi_get_devices", 00:07:24.416 "bdev_get_histogram", 00:07:24.416 "bdev_enable_histogram", 00:07:24.416 "bdev_set_qos_limit", 00:07:24.416 "bdev_set_qd_sampling_period", 00:07:24.416 "bdev_get_bdevs", 
00:07:24.416 "bdev_reset_iostat", 00:07:24.416 "bdev_get_iostat", 00:07:24.416 "bdev_examine", 00:07:24.416 "bdev_wait_for_examine", 00:07:24.416 "bdev_set_options", 00:07:24.416 "notify_get_notifications", 00:07:24.416 "notify_get_types", 00:07:24.416 "accel_get_stats", 00:07:24.416 "accel_set_options", 00:07:24.416 "accel_set_driver", 00:07:24.416 "accel_crypto_key_destroy", 00:07:24.416 "accel_crypto_keys_get", 00:07:24.416 "accel_crypto_key_create", 00:07:24.416 "accel_assign_opc", 00:07:24.416 "accel_get_module_info", 00:07:24.416 "accel_get_opc_assignments", 00:07:24.416 "vmd_rescan", 00:07:24.416 "vmd_remove_device", 00:07:24.416 "vmd_enable", 00:07:24.416 "sock_get_default_impl", 00:07:24.416 "sock_set_default_impl", 00:07:24.416 "sock_impl_set_options", 00:07:24.416 "sock_impl_get_options", 00:07:24.416 "iobuf_get_stats", 00:07:24.416 "iobuf_set_options", 00:07:24.416 "keyring_get_keys", 00:07:24.416 "framework_get_pci_devices", 00:07:24.416 "framework_get_config", 00:07:24.416 "framework_get_subsystems", 00:07:24.416 "vfu_tgt_set_base_path", 00:07:24.416 "trace_get_info", 00:07:24.416 "trace_get_tpoint_group_mask", 00:07:24.416 "trace_disable_tpoint_group", 00:07:24.416 "trace_enable_tpoint_group", 00:07:24.416 "trace_clear_tpoint_mask", 00:07:24.416 "trace_set_tpoint_mask", 00:07:24.416 "spdk_get_version", 00:07:24.416 "rpc_get_methods" 00:07:24.416 ] 00:07:24.416 00:19:52 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.416 00:19:52 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:24.416 00:19:52 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 855262 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 855262 ']' 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 855262 00:07:24.416 
00:19:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 855262 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 855262' 00:07:24.416 killing process with pid 855262 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 855262 00:07:24.416 00:19:52 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 855262 00:07:24.675 00:07:24.675 real 0m1.027s 00:07:24.675 user 0m1.900s 00:07:24.675 sys 0m0.390s 00:07:24.675 00:19:52 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.675 00:19:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.675 ************************************ 00:07:24.675 END TEST spdkcli_tcp 00:07:24.675 ************************************ 00:07:24.675 00:19:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:24.675 00:19:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:24.675 00:19:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.675 00:19:52 -- common/autotest_common.sh@10 -- # set +x 00:07:24.932 ************************************ 00:07:24.932 START TEST dpdk_mem_utility 00:07:24.932 ************************************ 00:07:24.932 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:24.932 * Looking for test storage... 
00:07:24.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:24.933 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:24.933 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=855434 00:07:24.933 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:24.933 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 855434 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 855434 ']' 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:24.933 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:24.933 [2024-07-12 00:19:52.632365] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:24.933 [2024-07-12 00:19:52.632471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855434 ] 00:07:24.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.933 [2024-07-12 00:19:52.689367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.191 [2024-07-12 00:19:52.771816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.191 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:25.191 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:07:25.191 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:25.191 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:25.191 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.191 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.191 { 00:07:25.191 "filename": "/tmp/spdk_mem_dump.txt" 00:07:25.191 } 00:07:25.191 00:19:52 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.191 00:19:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:25.449 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:25.449 1 heaps totaling size 814.000000 MiB 00:07:25.449 size: 814.000000 MiB heap id: 0 00:07:25.449 end heaps---------- 00:07:25.449 8 mempools totaling size 598.116089 MiB 00:07:25.449 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:25.449 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:25.449 size: 84.521057 MiB name: bdev_io_855434 00:07:25.449 size: 51.011292 MiB name: evtpool_855434 00:07:25.449 size: 50.003479 MiB 
name: msgpool_855434 00:07:25.449 size: 21.763794 MiB name: PDU_Pool 00:07:25.449 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:25.449 size: 0.026123 MiB name: Session_Pool 00:07:25.449 end mempools------- 00:07:25.449 6 memzones totaling size 4.142822 MiB 00:07:25.449 size: 1.000366 MiB name: RG_ring_0_855434 00:07:25.449 size: 1.000366 MiB name: RG_ring_1_855434 00:07:25.449 size: 1.000366 MiB name: RG_ring_4_855434 00:07:25.449 size: 1.000366 MiB name: RG_ring_5_855434 00:07:25.449 size: 0.125366 MiB name: RG_ring_2_855434 00:07:25.449 size: 0.015991 MiB name: RG_ring_3_855434 00:07:25.449 end memzones------- 00:07:25.449 00:19:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:25.449 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:07:25.449 list of free elements. size: 12.519348 MiB 00:07:25.449 element at address: 0x200000400000 with size: 1.999512 MiB 00:07:25.449 element at address: 0x200018e00000 with size: 0.999878 MiB 00:07:25.449 element at address: 0x200019000000 with size: 0.999878 MiB 00:07:25.449 element at address: 0x200003e00000 with size: 0.996277 MiB 00:07:25.449 element at address: 0x200031c00000 with size: 0.994446 MiB 00:07:25.449 element at address: 0x200013800000 with size: 0.978699 MiB 00:07:25.449 element at address: 0x200007000000 with size: 0.959839 MiB 00:07:25.449 element at address: 0x200019200000 with size: 0.936584 MiB 00:07:25.449 element at address: 0x200000200000 with size: 0.841614 MiB 00:07:25.449 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:07:25.449 element at address: 0x20000b200000 with size: 0.490723 MiB 00:07:25.449 element at address: 0x200000800000 with size: 0.487793 MiB 00:07:25.449 element at address: 0x200019400000 with size: 0.485657 MiB 00:07:25.449 element at address: 0x200027e00000 with size: 0.410034 MiB 00:07:25.449 element at address: 
0x200003a00000 with size: 0.355530 MiB 00:07:25.449 list of standard malloc elements. size: 199.218079 MiB 00:07:25.449 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:25.449 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:25.449 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:25.449 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:25.449 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:25.449 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:25.449 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:25.449 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:25.449 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:25.449 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003adb300 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003adb500 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003affa80 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003affb40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:07:25.449 
element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:25.449 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:25.449 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200027e69040 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:25.449 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:25.449 list of memzone associated elements. 
size: 602.262573 MiB 00:07:25.449 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:25.449 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:25.449 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:25.449 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:25.449 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:25.449 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_855434_0 00:07:25.449 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:25.449 associated memzone info: size: 48.002930 MiB name: MP_evtpool_855434_0 00:07:25.449 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:25.449 associated memzone info: size: 48.002930 MiB name: MP_msgpool_855434_0 00:07:25.449 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:25.449 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:25.449 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:25.449 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:25.449 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:25.449 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_855434 00:07:25.449 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:25.449 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_855434 00:07:25.449 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:25.449 associated memzone info: size: 1.007996 MiB name: MP_evtpool_855434 00:07:25.449 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:25.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:25.449 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:25.449 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:25.449 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:25.449 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:25.449 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:25.449 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:25.449 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:25.449 associated memzone info: size: 1.000366 MiB name: RG_ring_0_855434 00:07:25.450 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_1_855434 00:07:25.450 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_4_855434 00:07:25.450 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:25.450 associated memzone info: size: 1.000366 MiB name: RG_ring_5_855434 00:07:25.450 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:07:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_855434 00:07:25.450 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:25.450 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:25.450 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:25.450 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:25.450 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:25.450 element at address: 0x200003adf880 with size: 0.125488 MiB 00:07:25.450 associated memzone info: size: 0.125366 MiB name: RG_ring_2_855434 00:07:25.450 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:25.450 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:25.450 element at address: 0x200027e69100 with size: 0.023743 MiB 00:07:25.450 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:25.450 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:07:25.450 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_855434 00:07:25.450 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:07:25.450 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:25.450 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:07:25.450 associated memzone info: size: 0.000183 MiB name: MP_msgpool_855434 00:07:25.450 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:07:25.450 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_855434 00:07:25.450 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:07:25.450 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:25.450 00:19:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:25.450 00:19:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 855434 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 855434 ']' 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 855434 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 855434 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 855434' 00:07:25.450 killing process with pid 855434 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 855434 00:07:25.450 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 855434 00:07:25.707 00:07:25.707 real 0m0.826s 00:07:25.707 user 0m0.851s 
00:07:25.707 sys 0m0.356s 00:07:25.707 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.707 00:19:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:25.707 ************************************ 00:07:25.707 END TEST dpdk_mem_utility 00:07:25.707 ************************************ 00:07:25.707 00:19:53 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:25.707 00:19:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:25.707 00:19:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.707 00:19:53 -- common/autotest_common.sh@10 -- # set +x 00:07:25.707 ************************************ 00:07:25.707 START TEST event 00:07:25.707 ************************************ 00:07:25.707 00:19:53 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:25.707 * Looking for test storage... 
00:07:25.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:25.707 00:19:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:25.707 00:19:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:25.707 00:19:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.707 00:19:53 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:25.707 00:19:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.707 00:19:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.707 ************************************ 00:07:25.707 START TEST event_perf 00:07:25.707 ************************************ 00:07:25.707 00:19:53 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:25.707 Running I/O for 1 seconds...[2024-07-12 00:19:53.505540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:25.707 [2024-07-12 00:19:53.505650] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855586 ] 00:07:25.707 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.966 [2024-07-12 00:19:53.563548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.966 [2024-07-12 00:19:53.647626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.966 [2024-07-12 00:19:53.647644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.966 [2024-07-12 00:19:53.647694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.966 [2024-07-12 00:19:53.647697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.900 Running I/O for 1 seconds... 00:07:26.900 lcore 0: 271717 00:07:26.900 lcore 1: 271716 00:07:26.900 lcore 2: 271716 00:07:26.900 lcore 3: 271718 00:07:26.900 done. 
00:07:26.900 00:07:26.900 real 0m1.214s 00:07:26.900 user 0m4.132s 00:07:26.900 sys 0m0.073s 00:07:26.900 00:19:54 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.900 00:19:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.900 ************************************ 00:07:26.900 END TEST event_perf 00:07:26.901 ************************************ 00:07:26.901 00:19:54 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:26.901 00:19:54 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:26.901 00:19:54 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.901 00:19:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 ************************************ 00:07:27.159 START TEST event_reactor 00:07:27.159 ************************************ 00:07:27.159 00:19:54 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:27.159 [2024-07-12 00:19:54.773791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:27.159 [2024-07-12 00:19:54.773858] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855709 ] 00:07:27.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.159 [2024-07-12 00:19:54.828820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.159 [2024-07-12 00:19:54.904267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.535 test_start 00:07:28.535 oneshot 00:07:28.535 tick 100 00:07:28.535 tick 100 00:07:28.535 tick 250 00:07:28.535 tick 100 00:07:28.535 tick 100 00:07:28.535 tick 100 00:07:28.535 tick 250 00:07:28.535 tick 500 00:07:28.535 tick 100 00:07:28.535 tick 100 00:07:28.535 tick 250 00:07:28.535 tick 100 00:07:28.535 tick 100 00:07:28.535 test_end 00:07:28.535 00:07:28.535 real 0m1.201s 00:07:28.535 user 0m1.132s 00:07:28.535 sys 0m0.065s 00:07:28.535 00:19:55 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:28.535 00:19:55 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:28.535 ************************************ 00:07:28.535 END TEST event_reactor 00:07:28.535 ************************************ 00:07:28.535 00:19:55 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.535 00:19:55 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:28.535 00:19:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.535 00:19:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.535 ************************************ 00:07:28.535 START TEST event_reactor_perf 00:07:28.535 ************************************ 00:07:28.535 00:19:56 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.535 [2024-07-12 00:19:56.027068] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:28.535 [2024-07-12 00:19:56.027151] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855833 ] 00:07:28.535 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.535 [2024-07-12 00:19:56.078658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.535 [2024-07-12 00:19:56.155316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.468 test_start 00:07:29.468 test_end 00:07:29.468 Performance: 418065 events per second 00:07:29.468 00:07:29.468 real 0m1.195s 00:07:29.468 user 0m1.121s 00:07:29.468 sys 0m0.070s 00:07:29.468 00:19:57 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.468 00:19:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 ************************************ 00:07:29.468 END TEST event_reactor_perf 00:07:29.468 ************************************ 00:07:29.468 00:19:57 event -- event/event.sh@49 -- # uname -s 00:07:29.468 00:19:57 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:29.468 00:19:57 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:29.468 00:19:57 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.468 00:19:57 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.468 00:19:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:29.468 ************************************ 00:07:29.468 START TEST event_scheduler 00:07:29.468 ************************************ 00:07:29.468 00:19:57 
event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:29.726 * Looking for test storage... 00:07:29.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=856058 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 856058 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 856058 ']' 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.726 [2024-07-12 00:19:57.360150] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:07:29.726 [2024-07-12 00:19:57.360250] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856058 ] 00:07:29.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.726 [2024-07-12 00:19:57.417364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.726 [2024-07-12 00:19:57.502409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.726 [2024-07-12 00:19:57.502457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.726 [2024-07-12 00:19:57.502478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.726 [2024-07-12 00:19:57.502481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:07:29.726 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.726 00:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.984 POWER: Env isn't set yet! 00:07:29.984 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:29.984 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:07:29.984 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:07:29.984 POWER: Cannot get available frequencies of lcore 0 00:07:29.984 POWER: Attempting to initialise PSTAT power management... 
00:07:29.984 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:07:29.984 POWER: Initialized successfully for lcore 0 power management 00:07:29.984 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:07:29.984 POWER: Initialized successfully for lcore 1 power management 00:07:29.984 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:07:29.984 POWER: Initialized successfully for lcore 2 power management 00:07:29.984 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:07:29.984 POWER: Initialized successfully for lcore 3 power management 00:07:29.984 [2024-07-12 00:19:57.600773] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:29.984 [2024-07-12 00:19:57.600790] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:29.984 [2024-07-12 00:19:57.600800] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:29.984 00:19:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.984 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:29.984 00:19:57 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.984 00:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.984 [2024-07-12 00:19:57.684422] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:29.984 00:19:57 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:29.985 00:19:57 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.985 00:19:57 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 ************************************ 00:07:29.985 START TEST scheduler_create_thread 00:07:29.985 ************************************ 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 2 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 3 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 4 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 5 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 6 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.985 7 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 8 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 9 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 10 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:29.985 00:19:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:30.551 00:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.551 00:19:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:30.551 00:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.551 00:19:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.949 00:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.949 00:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:31.949 00:19:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:31.949 00:19:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.949 00:19:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.320 00:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.321 00:07:33.321 real 0m3.100s 00:07:33.321 user 0m0.014s 00:07:33.321 sys 0m0.004s 00:07:33.321 00:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.321 00:20:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.321 ************************************ 00:07:33.321 END TEST scheduler_create_thread 00:07:33.321 ************************************ 00:07:33.321 00:20:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:33.321 00:20:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 856058 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 856058 ']' 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 856058 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 856058 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 856058' 00:07:33.321 killing process with pid 856058 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 856058 00:07:33.321 00:20:00 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 856058 00:07:33.578 [2024-07-12 00:20:01.192917] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:33.578 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:07:33.578 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:07:33.578 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:07:33.578 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:07:33.578 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:07:33.578 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:07:33.578 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:07:33.578 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:07:33.578 00:07:33.578 real 0m4.124s 00:07:33.578 user 0m6.681s 00:07:33.578 sys 0m0.303s 00:07:33.578 00:20:01 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.578 00:20:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:33.578 ************************************ 00:07:33.578 END TEST event_scheduler 00:07:33.578 ************************************ 00:07:33.578 00:20:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:33.837 00:20:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:33.837 00:20:01 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.837 00:20:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.837 00:20:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:33.837 ************************************ 00:07:33.837 START TEST app_repeat 00:07:33.837 ************************************ 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:07:33.837 
00:20:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=856429 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 856429' 00:07:33.837 Process app_repeat pid: 856429 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:33.837 spdk_app_start Round 0 00:07:33.837 00:20:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 856429 /var/tmp/spdk-nbd.sock 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 856429 ']' 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:33.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:33.837 00:20:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:33.837 [2024-07-12 00:20:01.468189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:33.837 [2024-07-12 00:20:01.468254] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856429 ] 00:07:33.837 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.837 [2024-07-12 00:20:01.526339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.837 [2024-07-12 00:20:01.614369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.837 [2024-07-12 00:20:01.614374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.094 00:20:01 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.094 00:20:01 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:34.094 00:20:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.352 Malloc0 00:07:34.352 00:20:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:34.610 Malloc1 00:07:34.610 00:20:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 
00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.610 00:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:34.869 /dev/nbd0 00:07:34.869 00:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:34.869 00:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 
/proc/partitions 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:34.869 1+0 records in 00:07:34.869 1+0 records out 00:07:34.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177576 s, 23.1 MB/s 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:34.869 00:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:34.869 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.869 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:34.869 00:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:35.127 /dev/nbd1 00:07:35.127 00:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:35.127 00:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 
1 )) 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:07:35.127 00:20:02 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:35.385 1+0 records in 00:07:35.385 1+0 records out 00:07:35.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189527 s, 21.6 MB/s 00:07:35.385 00:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:35.385 00:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:07:35.386 00:20:02 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:35.386 00:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:07:35.386 00:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:07:35.386 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:35.386 00:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:35.386 00:20:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.386 00:20:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.386 00:20:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.386 00:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:35.386 { 00:07:35.386 "nbd_device": "/dev/nbd0", 00:07:35.386 "bdev_name": "Malloc0" 00:07:35.386 }, 00:07:35.386 { 00:07:35.386 "nbd_device": "/dev/nbd1", 00:07:35.386 "bdev_name": "Malloc1" 00:07:35.386 } 00:07:35.386 ]' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:35.643 { 00:07:35.643 "nbd_device": "/dev/nbd0", 00:07:35.643 "bdev_name": "Malloc0" 00:07:35.643 }, 00:07:35.643 { 00:07:35.643 "nbd_device": "/dev/nbd1", 00:07:35.643 "bdev_name": "Malloc1" 00:07:35.643 } 00:07:35.643 ]' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:35.643 /dev/nbd1' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:35.643 /dev/nbd1' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:35.643 256+0 records in
00:07:35.643 256+0 records out
00:07:35.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419762 s, 250 MB/s
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:35.643 256+0 records in
00:07:35.643 256+0 records out
00:07:35.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322164 s, 32.5 MB/s
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:35.643 256+0 records in
00:07:35.643 256+0 records out
00:07:35.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320071 s, 32.8 MB/s
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:35.643 00:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:35.901 00:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:36.161 00:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:36.419 00:20:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:36.419 00:20:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:36.677 00:20:04 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:36.935 [2024-07-12 00:20:04.584296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:36.935 [2024-07-12 00:20:04.673347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.935 [2024-07-12 00:20:04.673347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:36.935 [2024-07-12 00:20:04.723493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:36.935 [2024-07-12 00:20:04.723557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:40.250 00:20:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:40.250 00:20:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:40.250 spdk_app_start Round 1
00:07:40.250 00:20:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 856429 /var/tmp/spdk-nbd.sock
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 856429 ']'
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:40.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:40.250 00:20:07 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:07:40.250 00:20:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:40.250 Malloc0
00:07:40.250 00:20:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:40.508 Malloc1
00:07:40.766 00:20:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:40.766 00:20:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:41.025 /dev/nbd0
00:07:41.025 00:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:41.025 00:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:41.025 1+0 records in
00:07:41.025 1+0 records out
00:07:41.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167037 s, 24.5 MB/s
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:07:41.025 00:20:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:07:41.025 00:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:41.025 00:20:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:41.025 00:20:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:41.283 /dev/nbd1
00:07:41.283 00:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:41.283 00:20:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:07:41.283 00:20:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:41.283 1+0 records in
00:07:41.283 1+0 records out
00:07:41.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184528 s, 22.2 MB/s
00:07:41.283 00:20:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:41.283 00:20:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:07:41.283 00:20:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:41.283 00:20:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:07:41.283 00:20:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:07:41.283 00:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:41.283 00:20:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:41.283 00:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:41.283 00:20:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.283 00:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:41.542 {
00:07:41.542 "nbd_device": "/dev/nbd0",
00:07:41.542 "bdev_name": "Malloc0"
00:07:41.542 },
00:07:41.542 {
00:07:41.542 "nbd_device": "/dev/nbd1",
00:07:41.542 "bdev_name": "Malloc1"
00:07:41.542 }
00:07:41.542 ]'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:41.542 {
00:07:41.542 "nbd_device": "/dev/nbd0",
00:07:41.542 "bdev_name": "Malloc0"
00:07:41.542 },
00:07:41.542 {
00:07:41.542 "nbd_device": "/dev/nbd1",
00:07:41.542 "bdev_name": "Malloc1"
00:07:41.542 }
00:07:41.542 ]'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:41.542 /dev/nbd1'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:41.542 /dev/nbd1'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:41.542 256+0 records in
00:07:41.542 256+0 records out
00:07:41.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00595446 s, 176 MB/s
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.542 00:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:41.800 256+0 records in
00:07:41.800 256+0 records out
00:07:41.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284061 s, 36.9 MB/s
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:41.800 256+0 records in
00:07:41.800 256+0 records out
00:07:41.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291228 s, 36.0 MB/s
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.800 00:20:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:41.801 00:20:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:42.058 00:20:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:42.316 00:20:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:42.573 00:20:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:42.573 00:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:42.573 00:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:42.831 00:20:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:42.831 00:20:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:43.089 00:20:10 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:43.089 [2024-07-12 00:20:10.881004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:43.348 [2024-07-12 00:20:10.967898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:43.348 [2024-07-12 00:20:10.967923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.348 [2024-07-12 00:20:11.019095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:43.348 [2024-07-12 00:20:11.019164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:46.628 00:20:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:46.628 00:20:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:46.628 spdk_app_start Round 2
00:07:46.628 00:20:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 856429 /var/tmp/spdk-nbd.sock
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 856429 ']'
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:46.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:46.628 00:20:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:46.628 00:20:14 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:46.628 00:20:14 event.app_repeat -- common/autotest_common.sh@860 -- # return 0
00:07:46.628 00:20:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:46.628 Malloc0
00:07:46.628 00:20:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:46.886 Malloc1
00:07:46.886 00:20:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:46.886 00:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:47.143 /dev/nbd0
00:07:47.143 00:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:47.143 00:20:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:07:47.143 00:20:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:47.144 1+0 records in
00:07:47.144 1+0 records out
00:07:47.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240095 s, 17.1 MB/s
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:07:47.144 00:20:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:07:47.144 00:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:47.144 00:20:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:47.144 00:20:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:47.400 /dev/nbd1
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 ))
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 ))
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@869 -- # break
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 ))
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 ))
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:47.400 1+0 records in
00:07:47.400 1+0 records out
00:07:47.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236701 s, 17.3 MB/s
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']'
00:07:47.400 00:20:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:47.400 00:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:47.657 {
00:07:47.657 "nbd_device": "/dev/nbd0",
00:07:47.657 "bdev_name": "Malloc0"
00:07:47.657 },
00:07:47.657 {
00:07:47.657 "nbd_device": "/dev/nbd1",
00:07:47.657 "bdev_name": "Malloc1"
00:07:47.657 }
00:07:47.657 ]'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:47.657 {
00:07:47.657 "nbd_device": "/dev/nbd0",
00:07:47.657 "bdev_name": "Malloc0"
00:07:47.657 },
00:07:47.657 {
00:07:47.657 "nbd_device": "/dev/nbd1",
00:07:47.657 "bdev_name": "Malloc1"
00:07:47.657 }
00:07:47.657 ]'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:47.657 /dev/nbd1'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:47.657 /dev/nbd1' 00:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:47.657 256+0 records in
00:07:47.657 256+0 records out
00:07:47.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596849 s, 176 MB/s
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:47.657 256+0 records in
00:07:47.657 256+0 records out
00:07:47.657 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294413 s, 35.6 MB/s
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:47.657 00:20:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:47.915 256+0 records in
00:07:47.915 256+0 records out
00:07:47.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278895 s, 37.6 MB/s
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:47.915 00:20:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:48.172 00:20:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:48.430 00:20:16 event.app_repeat --
bdev/nbd_common.sh@41 -- # break 00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.430 00:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:48.687 00:20:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:48.687 00:20:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:48.945 00:20:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:48.945 [2024-07-12 00:20:16.754251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:49.203 [2024-07-12 00:20:16.843146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.203 [2024-07-12 00:20:16.843179] 
reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.203 [2024-07-12 00:20:16.893323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:49.203 [2024-07-12 00:20:16.893391] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:52.480 00:20:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 856429 /var/tmp/spdk-nbd.sock 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 856429 ']' 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:52.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:52.480 00:20:19 event.app_repeat -- event/event.sh@39 -- # killprocess 856429 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 856429 ']' 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 856429 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 856429 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 856429' 00:07:52.480 killing process with pid 856429 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@965 -- # kill 856429 00:07:52.480 00:20:19 event.app_repeat -- common/autotest_common.sh@970 -- # wait 856429 00:07:52.480 spdk_app_start is called in Round 0. 00:07:52.480 Shutdown signal received, stop current app iteration 00:07:52.480 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:52.480 spdk_app_start is called in Round 1. 00:07:52.480 Shutdown signal received, stop current app iteration 00:07:52.480 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:52.480 spdk_app_start is called in Round 2. 
00:07:52.480 Shutdown signal received, stop current app iteration 00:07:52.480 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:07:52.480 spdk_app_start is called in Round 3. 00:07:52.480 Shutdown signal received, stop current app iteration 00:07:52.480 00:20:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:52.480 00:20:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:52.480 00:07:52.480 real 0m18.644s 00:07:52.480 user 0m41.263s 00:07:52.480 sys 0m3.273s 00:07:52.480 00:20:20 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.480 00:20:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:52.480 ************************************ 00:07:52.480 END TEST app_repeat 00:07:52.480 ************************************ 00:07:52.480 00:20:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:52.480 00:20:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:52.480 00:20:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.480 00:20:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.480 00:20:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:52.480 ************************************ 00:07:52.480 START TEST cpu_locks 00:07:52.480 ************************************ 00:07:52.480 00:20:20 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:52.480 * Looking for test storage... 
00:07:52.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:52.480 00:20:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:52.480 00:20:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:52.480 00:20:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:52.480 00:20:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:52.480 00:20:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:52.480 00:20:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.480 00:20:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.480 ************************************ 00:07:52.480 START TEST default_locks 00:07:52.480 ************************************ 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=858364 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 858364 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 858364 ']' 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.480 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:52.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.481 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.481 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:52.481 [2024-07-12 00:20:20.285204] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:52.481 [2024-07-12 00:20:20.285298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858364 ] 00:07:52.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.738 [2024-07-12 00:20:20.346840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.738 [2024-07-12 00:20:20.438108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.996 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.996 00:20:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:52.996 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 858364 00:07:52.996 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 858364 00:07:52.996 00:20:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:53.560 lslocks: write error 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 858364 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 858364 ']' 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 858364 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 858364 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 858364' 00:07:53.560 killing process with pid 858364 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 858364 00:07:53.560 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 858364 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 858364 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 858364 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 858364 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 858364 ']' 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (858364) - No such process 00:07:53.818 ERROR: process (pid: 858364) is no longer running 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:53.818 00:07:53.818 real 0m1.221s 00:07:53.818 user 0m1.231s 00:07:53.818 sys 0m0.546s 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.818 00:20:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.818 ************************************ 00:07:53.818 END TEST default_locks 00:07:53.818 ************************************ 00:07:53.818 00:20:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:53.818 00:20:21 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:53.818 00:20:21 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.818 00:20:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 ************************************ 00:07:53.818 START TEST default_locks_via_rpc 00:07:53.818 ************************************ 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=858577 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 858577 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 858577 ']' 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:53.818 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 [2024-07-12 00:20:21.556429] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:53.818 [2024-07-12 00:20:21.556532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858577 ] 00:07:53.818 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.818 [2024-07-12 00:20:21.619332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.076 [2024-07-12 00:20:21.706522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.076 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:54.076 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:54.076 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:54.076 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.076 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:54.333 00:20:21 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:54.333 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.334 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.334 00:20:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.334 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 858577 00:07:54.334 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 858577 00:07:54.334 00:20:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 858577 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 858577 ']' 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 858577 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 858577 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 858577' 00:07:54.591 killing process with pid 858577 00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 858577 
00:07:54.591 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 858577 00:07:54.849 00:07:54.849 real 0m1.077s 00:07:54.849 user 0m1.101s 00:07:54.849 sys 0m0.504s 00:07:54.849 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:54.849 00:20:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.849 ************************************ 00:07:54.849 END TEST default_locks_via_rpc 00:07:54.849 ************************************ 00:07:54.849 00:20:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:54.849 00:20:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:54.849 00:20:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:54.849 00:20:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.849 ************************************ 00:07:54.849 START TEST non_locking_app_on_locked_coremask 00:07:54.849 ************************************ 00:07:54.849 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:54.849 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=858707 00:07:54.849 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:54.849 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 858707 /var/tmp/spdk.sock 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 858707 ']' 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:54.850 00:20:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.850 [2024-07-12 00:20:22.687786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:54.850 [2024-07-12 00:20:22.687888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858707 ] 00:07:55.108 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.108 [2024-07-12 00:20:22.750988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.108 [2024-07-12 00:20:22.842096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=858726 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 858726 /var/tmp/spdk2.sock 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 858726 ']' 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:55.365 00:20:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.365 [2024-07-12 00:20:23.110161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:55.365 [2024-07-12 00:20:23.110267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858726 ] 00:07:55.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.365 [2024-07-12 00:20:23.202828] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:55.365 [2024-07-12 00:20:23.202875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.622 [2024-07-12 00:20:23.386012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.554 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:56.554 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:07:56.554 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 858707
00:07:56.554 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 858707
00:07:56.554 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:57.119 lslocks: write error
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 858707
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 858707 ']'
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 858707
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 858707
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 858707'
00:07:57.119 killing process with pid 858707
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 858707
00:07:57.119 00:20:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 858707
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 858726
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 858726 ']'
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 858726
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 858726
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 858726'
00:07:57.685 killing process with pid 858726
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 858726
00:07:57.685 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 858726
00:07:57.944
00:07:57.944 real 0m3.113s
00:07:57.944 user 0m3.501s
00:07:57.944 sys 0m1.054s
00:07:57.944 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:57.944 00:20:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:57.944 ************************************
00:07:57.944 END TEST non_locking_app_on_locked_coremask
00:07:57.944 ************************************
00:07:57.944 00:20:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:57.944 00:20:25 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:07:57.944 00:20:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:57.944 00:20:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:58.203 ************************************
00:07:58.203 START TEST locking_app_on_unlocked_coremask
00:07:58.203 ************************************
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=858983
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 858983 /var/tmp/spdk.sock
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 858983 ']'
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:58.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:58.203 00:20:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:58.203 [2024-07-12 00:20:25.853916] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:07:58.203 [2024-07-12 00:20:25.854004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858983 ]
00:07:58.203 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.203 [2024-07-12 00:20:25.913188] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:58.203 [2024-07-12 00:20:25.913223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.203 [2024-07-12 00:20:26.000369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=859057
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 859057 /var/tmp/spdk2.sock
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 859057 ']'
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:58.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:58.490 00:20:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:58.490 [2024-07-12 00:20:26.272989] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:07:58.491 [2024-07-12 00:20:26.273091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859057 ]
00:07:58.491 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.786 [2024-07-12 00:20:26.364857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.786 [2024-07-12 00:20:26.542266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.722 00:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:07:59.722 00:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0
00:07:59.722 00:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 859057
00:07:59.722 00:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 859057
00:07:59.722 00:20:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:00.288 lslocks: write error
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 858983
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 858983 ']'
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 858983
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 858983
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 858983'
00:08:00.288 killing process with pid 858983
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 858983
00:08:00.288 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 858983
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 859057
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 859057 ']'
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 859057
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 859057
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 859057'
00:08:00.856 killing process with pid 859057
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 859057
00:08:00.856 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 859057
00:08:01.115
00:08:01.115 real 0m3.083s
00:08:01.115 user 0m3.418s
00:08:01.115 sys 0m1.080s
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.115 ************************************
00:08:01.115 END TEST locking_app_on_unlocked_coremask
00:08:01.115 ************************************
00:08:01.115 00:20:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:08:01.115 00:20:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:08:01.115 00:20:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:01.115 00:20:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:01.115 ************************************
00:08:01.115 START TEST locking_app_on_locked_coremask
00:08:01.115 ************************************
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=859307
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 859307 /var/tmp/spdk.sock
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 859307 ']'
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:01.115 00:20:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.374 [2024-07-12 00:20:28.994085] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:01.374 [2024-07-12 00:20:28.994185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859307 ]
00:08:01.374 EAL: No free 2048 kB hugepages reported on node 1
00:08:01.374 [2024-07-12 00:20:29.056811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.374 [2024-07-12 00:20:29.147415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=859393
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 859393 /var/tmp/spdk2.sock
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 859393 /var/tmp/spdk2.sock
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 859393 /var/tmp/spdk2.sock
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 859393 ']'
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:01.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:01.634 00:20:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:01.892 [2024-07-12 00:20:29.421076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:01.892 [2024-07-12 00:20:29.421175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859393 ]
00:08:01.892 EAL: No free 2048 kB hugepages reported on node 1
00:08:01.892 [2024-07-12 00:20:29.512380] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 859307 has claimed it.
00:08:01.892 [2024-07-12 00:20:29.512435] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:02.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (859393) - No such process
00:08:02.457 ERROR: process (pid: 859393) is no longer running
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 859307
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 859307
00:08:02.457 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:03.024 lslocks: write error
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 859307
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 859307 ']'
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 859307
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 859307
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 859307'
00:08:03.024 killing process with pid 859307
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 859307
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 859307
00:08:03.024
00:08:03.024 real 0m1.915s
00:08:03.024 user 0m2.186s
00:08:03.024 sys 0m0.626s
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:03.024 00:20:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.024 ************************************
00:08:03.024 END TEST locking_app_on_locked_coremask
00:08:03.024 ************************************
00:08:03.282 00:20:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:03.282 00:20:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:08:03.282 00:20:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:03.282 00:20:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:03.282 ************************************
00:08:03.282 START TEST locking_overlapped_coremask
00:08:03.282 ************************************
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=859534
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 859534 /var/tmp/spdk.sock
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 859534 ']'
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:03.282 00:20:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.282 [2024-07-12 00:20:30.967887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:03.282 [2024-07-12 00:20:30.967987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859534 ]
00:08:03.282 EAL: No free 2048 kB hugepages reported on node 1
00:08:03.282 [2024-07-12 00:20:31.029264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.282 [2024-07-12 00:20:31.120562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.282 [2024-07-12 00:20:31.120612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.282 [2024-07-12 00:20:31.120622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=859592
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 859592 /var/tmp/spdk2.sock
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 859592 /var/tmp/spdk2.sock
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 859592 /var/tmp/spdk2.sock
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 859592 ']'
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:03.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:03.541 00:20:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.799 [2024-07-12 00:20:31.402188] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:03.799 [2024-07-12 00:20:31.402293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859592 ]
00:08:03.799 EAL: No free 2048 kB hugepages reported on node 1
00:08:03.799 [2024-07-12 00:20:31.492919] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 859534 has claimed it.
00:08:03.799 [2024-07-12 00:20:31.492979] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:04.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (859592) - No such process
00:08:04.364 ERROR: process (pid: 859592) is no longer running
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 859534
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 859534 ']'
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 859534
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 859534
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 859534'
00:08:04.364 killing process with pid 859534
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 859534
00:08:04.364 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 859534
00:08:04.621
00:08:04.621 real 0m1.526s
00:08:04.621 user 0m4.220s
00:08:04.621 sys 0m0.437s
00:08:04.621 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:04.621 00:20:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:04.621 ************************************
00:08:04.621 END TEST locking_overlapped_coremask
00:08:04.621 ************************************
00:08:04.621 00:20:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:04.621 00:20:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:08:04.621 00:20:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:04.621 00:20:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:04.879 ************************************
00:08:04.879 START TEST locking_overlapped_coremask_via_rpc
00:08:04.879 ************************************
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=859755
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 859755 /var/tmp/spdk.sock
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 859755 ']'
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:04.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:04.879 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:04.879 [2024-07-12 00:20:32.543621] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:04.879 [2024-07-12 00:20:32.543726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859755 ]
00:08:04.879 EAL: No free 2048 kB hugepages reported on node 1
00:08:04.879 [2024-07-12 00:20:32.604458] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:04.879 [2024-07-12 00:20:32.604507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:04.879 [2024-07-12 00:20:32.697047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:04.879 [2024-07-12 00:20:32.697126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:04.879 [2024-07-12 00:20:32.697130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=859775
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 859775 /var/tmp/spdk2.sock
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 859775 ']'
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:05.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:08:05.137 00:20:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:05.394 [2024-07-12 00:20:32.972619] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:05.394 [2024-07-12 00:20:32.972723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859775 ]
00:08:05.394 EAL: No free 2048 kB hugepages reported on node 1
00:08:05.394 [2024-07-12 00:20:33.063837] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:05.394 [2024-07-12 00:20:33.063879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:05.652 [2024-07-12 00:20:33.241865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:05.652 [2024-07-12 00:20:33.245637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:08:05.652 [2024-07-12 00:20:33.245639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:06.217 00:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:08:06.217 00:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0
00:08:06.217 00:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:06.217 00:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:08:06.218 00:20:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:06.218 00:20:34
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.218 [2024-07-12 00:20:34.016699] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 859755 has claimed it. 00:08:06.218 request: 00:08:06.218 { 00:08:06.218 "method": "framework_enable_cpumask_locks", 00:08:06.218 "req_id": 1 00:08:06.218 } 00:08:06.218 Got JSON-RPC error response 00:08:06.218 response: 00:08:06.218 { 00:08:06.218 "code": -32603, 00:08:06.218 "message": "Failed to claim CPU core: 2" 00:08:06.218 } 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 859755 /var/tmp/spdk.sock 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- 
# '[' -z 859755 ']' 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.218 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 859775 /var/tmp/spdk2.sock 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 859775 ']' 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:06.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.784 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:07.043 00:08:07.043 real 0m2.146s 00:08:07.043 user 0m1.238s 00:08:07.043 sys 0m0.202s 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.043 00:20:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.043 ************************************ 00:08:07.043 END TEST locking_overlapped_coremask_via_rpc 00:08:07.043 ************************************ 00:08:07.043 00:20:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:07.043 00:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 859755 ]] 00:08:07.043 00:20:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 859755 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 859755 ']' 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 859755 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 859755 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 859755' 00:08:07.043 killing process with pid 859755 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 859755 00:08:07.043 00:20:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 859755 00:08:07.302 00:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 859775 ]] 00:08:07.302 00:20:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 859775 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 859775 ']' 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 859775 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 859775 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 859775' 00:08:07.302 
killing process with pid 859775 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 859775 00:08:07.302 00:20:34 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 859775 00:08:07.560 00:20:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:07.560 00:20:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:07.560 00:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 859755 ]] 00:08:07.561 00:20:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 859755 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 859755 ']' 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 859755 00:08:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (859755) - No such process 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 859755 is not found' 00:08:07.561 Process with pid 859755 is not found 00:08:07.561 00:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 859775 ]] 00:08:07.561 00:20:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 859775 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 859775 ']' 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 859775 00:08:07.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (859775) - No such process 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 859775 is not found' 00:08:07.561 Process with pid 859775 is not found 00:08:07.561 00:20:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:07.561 00:08:07.561 real 0m15.112s 00:08:07.561 user 0m27.602s 00:08:07.561 sys 0m5.296s 00:08:07.561 00:20:35 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.561 00:20:35 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.561 ************************************ 00:08:07.561 END TEST cpu_locks 00:08:07.561 ************************************ 00:08:07.561 00:08:07.561 real 0m41.875s 00:08:07.561 user 1m22.082s 00:08:07.561 sys 0m9.332s 00:08:07.561 00:20:35 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.561 00:20:35 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 ************************************ 00:08:07.561 END TEST event 00:08:07.561 ************************************ 00:08:07.561 00:20:35 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:07.561 00:20:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:07.561 00:20:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.561 00:20:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.561 ************************************ 00:08:07.561 START TEST thread 00:08:07.561 ************************************ 00:08:07.561 00:20:35 thread -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:07.561 * Looking for test storage... 
00:08:07.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:07.561 00:20:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:07.561 00:20:35 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:07.561 00:20:35 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.561 00:20:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.819 ************************************ 00:08:07.819 START TEST thread_poller_perf 00:08:07.819 ************************************ 00:08:07.819 00:20:35 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:07.819 [2024-07-12 00:20:35.422584] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:07.819 [2024-07-12 00:20:35.422657] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860074 ] 00:08:07.819 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.819 [2024-07-12 00:20:35.479959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.819 [2024-07-12 00:20:35.566905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.819 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:09.194 ====================================== 00:08:09.194 busy:2709375044 (cyc) 00:08:09.194 total_run_count: 261000 00:08:09.194 tsc_hz: 2700000000 (cyc) 00:08:09.194 ====================================== 00:08:09.194 poller_cost: 10380 (cyc), 3844 (nsec) 00:08:09.194 00:08:09.194 real 0m1.227s 00:08:09.194 user 0m1.149s 00:08:09.194 sys 0m0.071s 00:08:09.194 00:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.194 00:20:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.194 ************************************ 00:08:09.194 END TEST thread_poller_perf 00:08:09.194 ************************************ 00:08:09.194 00:20:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:09.194 00:20:36 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:09.194 00:20:36 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.194 00:20:36 thread -- common/autotest_common.sh@10 -- # set +x 00:08:09.194 ************************************ 00:08:09.194 START TEST thread_poller_perf 00:08:09.194 ************************************ 00:08:09.194 00:20:36 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:09.194 [2024-07-12 00:20:36.704744] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:09.194 [2024-07-12 00:20:36.704817] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860195 ] 00:08:09.194 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.194 [2024-07-12 00:20:36.763778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.194 [2024-07-12 00:20:36.852436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.194 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:10.129 ====================================== 00:08:10.129 busy:2702681052 (cyc) 00:08:10.129 total_run_count: 3641000 00:08:10.129 tsc_hz: 2700000000 (cyc) 00:08:10.129 ====================================== 00:08:10.129 poller_cost: 742 (cyc), 274 (nsec) 00:08:10.129 00:08:10.129 real 0m1.226s 00:08:10.129 user 0m1.141s 00:08:10.129 sys 0m0.077s 00:08:10.129 00:20:37 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.129 00:20:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 END TEST thread_poller_perf 00:08:10.129 ************************************ 00:08:10.129 00:20:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:10.129 00:08:10.129 real 0m2.605s 00:08:10.129 user 0m2.338s 00:08:10.129 sys 0m0.264s 00:08:10.129 00:20:37 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.129 00:20:37 thread -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 ************************************ 00:08:10.129 END TEST thread 00:08:10.129 ************************************ 00:08:10.129 00:20:37 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:10.129 00:20:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:10.129 
00:20:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.129 00:20:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 ************************************ 00:08:10.388 START TEST accel 00:08:10.388 ************************************ 00:08:10.388 00:20:37 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:08:10.388 * Looking for test storage... 00:08:10.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:10.388 00:20:38 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:10.388 00:20:38 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:10.388 00:20:38 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.388 00:20:38 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=860445 00:08:10.388 00:20:38 accel -- accel/accel.sh@63 -- # waitforlisten 860445 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@827 -- # '[' -z 860445 ']' 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.388 00:20:38 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:10.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.388 00:20:38 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:10.388 00:20:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.388 00:20:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.388 00:20:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.388 00:20:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.388 00:20:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.388 00:20:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.388 00:20:38 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:10.388 00:20:38 accel -- accel/accel.sh@41 -- # jq -r . 00:08:10.388 [2024-07-12 00:20:38.101932] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:10.388 [2024-07-12 00:20:38.102038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860445 ] 00:08:10.388 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.388 [2024-07-12 00:20:38.161860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.646 [2024-07-12 00:20:38.251575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.646 00:20:38 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:10.646 00:20:38 accel -- common/autotest_common.sh@860 -- # return 0 00:08:10.646 00:20:38 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:10.646 00:20:38 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:10.646 00:20:38 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:10.646 00:20:38 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:10.646 00:20:38 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:10.646 00:20:38 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:10.646 00:20:38 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:10.646 00:20:38 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.646 00:20:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.646 00:20:38 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # 
IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # IFS== 00:08:10.905 00:20:38 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:10.905 00:20:38 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:10.905 00:20:38 accel -- accel/accel.sh@75 -- # killprocess 860445 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@946 -- # '[' -z 860445 ']' 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@950 -- # kill -0 860445 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@951 -- # uname 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 860445 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 860445' 00:08:10.905 killing process with pid 860445 00:08:10.905 00:20:38 accel -- common/autotest_common.sh@965 -- # kill 860445 00:08:10.905 
00:20:38 accel -- common/autotest_common.sh@970 -- # wait 860445 00:08:11.163 00:20:38 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:11.163 00:20:38 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:11.163 00:20:38 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:11.163 00:20:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:11.163 00:20:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.163 00:20:38 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:11.163 00:20:38 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:11.163 00:20:38 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:11.163 00:20:38 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:08:11.163 00:20:38 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:08:11.163 00:20:38 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:08:11.163 00:20:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:11.163 00:20:38 accel -- common/autotest_common.sh@10 -- # set +x
00:08:11.163 ************************************
00:08:11.163 START TEST accel_missing_filename
00:08:11.163 ************************************
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.163 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:08:11.164 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.164 00:20:38 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:08:11.164 00:20:38 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:08:11.164 [2024-07-12 00:20:38.933799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:11.164 [2024-07-12 00:20:38.933871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860492 ]
00:08:11.164 EAL: No free 2048 kB hugepages reported on node 1
00:08:11.164 [2024-07-12 00:20:38.991907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.422 [2024-07-12 00:20:39.081577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.422 [2024-07-12 00:20:39.131510] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:11.422 [2024-07-12 00:20:39.180489] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:08:11.422 A filename is required.
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:08:11.422 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:11.423
00:08:11.423 real 0m0.329s
00:08:11.423 user 0m0.236s
00:08:11.423 sys 0m0.128s
00:08:11.423 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:11.423 00:20:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:08:11.423 ************************************
00:08:11.423 END TEST accel_missing_filename
00:08:11.423 ************************************
00:08:11.681 00:20:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:08:11.681 00:20:39 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']'
00:08:11.681 00:20:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:11.681 00:20:39 accel -- common/autotest_common.sh@10 -- # set +x
00:08:11.681 ************************************
00:08:11.681 START TEST accel_compress_verify
00:08:11.681 ************************************
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.681 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:08:11.681 00:20:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:08:11.681 [2024-07-12 00:20:39.315151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:11.681 [2024-07-12 00:20:39.315231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860604 ]
00:08:11.681 EAL: No free 2048 kB hugepages reported on node 1
00:08:11.681 [2024-07-12 00:20:39.373822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:11.681 [2024-07-12 00:20:39.463435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:11.681 [2024-07-12 00:20:39.515095] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:11.940 [2024-07-12 00:20:39.564258] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:08:11.940
00:08:11.940 Compression does not support the verify option, aborting.
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:11.940
00:08:11.940 real 0m0.332s
00:08:11.940 user 0m0.240s
00:08:11.940 sys 0m0.126s
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:11.940 00:20:39 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:08:11.940 ************************************
00:08:11.940 END TEST accel_compress_verify
00:08:11.940 ************************************
00:08:11.940 00:20:39 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@10 -- # set +x
00:08:11.940 ************************************
00:08:11.940 START TEST accel_wrong_workload
00:08:11.940 ************************************
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:08:11.940 00:20:39 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:08:11.940 Unsupported workload type: foobar
00:08:11.940 [2024-07-12 00:20:39.699854] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:08:11.940 accel_perf options:
00:08:11.940 [-h help message]
00:08:11.940 [-q queue depth per core]
00:08:11.940 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:08:11.940 [-T number of threads per core
00:08:11.940 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:08:11.940 [-t time in seconds]
00:08:11.940 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:08:11.940 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:08:11.940 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:08:11.940 [-l for compress/decompress workloads, name of uncompressed input file
00:08:11.940 [-S for crc32c workload, use this seed value (default 0)
00:08:11.940 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:08:11.940 [-f for fill workload, use this BYTE value (default 255)
00:08:11.940 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:08:11.940 [-y verify result if this switch is on]
00:08:11.940 [-a tasks to allocate per core (default: same value as -q)]
00:08:11.940 Can be used to spread operations across a wider range of memory.
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:11.940
00:08:11.940 real 0m0.023s
00:08:11.940 user 0m0.013s
00:08:11.940 sys 0m0.010s
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:11.940 00:20:39 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:08:11.940 ************************************
00:08:11.940 END TEST accel_wrong_workload
00:08:11.940 ************************************
00:08:11.940 00:20:39 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']'
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:11.940 Error: writing output failed: Broken pipe
00:08:11.940 00:20:39 accel -- common/autotest_common.sh@10 -- # set +x
00:08:11.940 ************************************
00:08:11.940 START TEST accel_negative_buffers
00:08:11.940 ************************************
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.940 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:08:11.941 00:20:39 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
00:08:11.941 -x option must be non-negative.
00:08:11.941 [2024-07-12 00:20:39.765475] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:08:11.941 accel_perf options:
00:08:11.941 [-h help message]
00:08:11.941 [-q queue depth per core]
00:08:11.941 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:08:11.941 [-T number of threads per core
00:08:11.941 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:08:11.941 [-t time in seconds]
00:08:11.941 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:08:11.941 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:08:11.941 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:08:11.941 [-l for compress/decompress workloads, name of uncompressed input file
00:08:11.941 [-S for crc32c workload, use this seed value (default 0)
00:08:11.941 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:08:11.941 [-f for fill workload, use this BYTE value (default 255)
00:08:11.941 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:08:11.941 [-y verify result if this switch is on]
00:08:11.941 [-a tasks to allocate per core (default: same value as -q)]
00:08:11.941 Can be used to spread operations across a wider range of memory.
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:08:11.941
00:08:11.941 real 0m0.023s
00:08:11.941 user 0m0.015s
00:08:11.941 sys 0m0.007s
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:11.941 00:20:39 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:08:11.941 ************************************
00:08:11.941 END TEST accel_negative_buffers
00:08:11.941 ************************************
00:08:12.199 Error: writing output failed: Broken pipe
00:08:12.199 00:20:39 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:08:12.199 00:20:39 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:08:12.199 00:20:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:12.199 00:20:39 accel -- common/autotest_common.sh@10 -- # set +x
00:08:12.199 ************************************
00:08:12.199 START TEST accel_crc32c
00:08:12.199 ************************************
00:08:12.199 00:20:39 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:08:12.199 00:20:39 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:08:12.199 [2024-07-12 00:20:39.831265] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:12.199 [2024-07-12 00:20:39.831337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860671 ]
00:08:12.199 EAL: No free 2048 kB hugepages reported on node 1
00:08:12.199 [2024-07-12 00:20:39.890010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.199 [2024-07-12 00:20:39.980773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.199 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.200 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:12.458 00:20:40 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:08:13.391 00:20:41 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:13.391
00:08:13.391 real 0m1.336s
00:08:13.391 user 0m1.208s
00:08:13.391 sys 0m0.129s
00:08:13.391 00:20:41 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:13.391 00:20:41 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x
00:08:13.391 ************************************
00:08:13.391 END TEST accel_crc32c
00:08:13.391 ************************************
00:08:13.391 00:20:41 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:08:13.391 00:20:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:08:13.391 00:20:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:13.391 00:20:41 accel -- common/autotest_common.sh@10 -- # set +x
00:08:13.391 ************************************
00:08:13.391 START TEST accel_crc32c_C2
00:08:13.391 ************************************
00:20:41 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
[2024-07-12 00:20:41.216467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:13.391 [2024-07-12 00:20:41.216533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860798 ]
00:08:13.650 EAL: No free 2048 kB hugepages reported on node 1
00:08:13.650 [2024-07-12 00:20:41.275240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.650 [2024-07-12 00:20:41.364811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.650 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- #
val= 00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:13.651 00:20:41 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:15.077 00:08:15.077 real 0m1.334s 00:08:15.077 user 0m0.010s 00:08:15.077 sys 0m0.003s 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.077 00:20:42 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:15.077 ************************************ 00:08:15.077 END TEST accel_crc32c_C2 00:08:15.077 ************************************ 00:08:15.077 00:20:42 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:15.077 00:20:42 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:15.077 00:20:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.077 00:20:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.077 ************************************ 00:08:15.077 START TEST accel_copy 00:08:15.077 ************************************ 00:08:15.077 00:20:42 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:15.077 [2024-07-12 00:20:42.598450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:15.077 [2024-07-12 00:20:42.598516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861002 ] 00:08:15.077 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.077 [2024-07-12 00:20:42.656754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.077 [2024-07-12 00:20:42.747458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.077 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=software 
00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:15.078 00:20:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:16.452 00:20:43 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.452 00:08:16.452 real 0m1.334s 00:08:16.452 user 0m1.210s 00:08:16.452 sys 0m0.125s 00:08:16.452 00:20:43 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.452 00:20:43 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.452 ************************************ 00:08:16.452 END TEST accel_copy 00:08:16.452 ************************************ 00:08:16.452 00:20:43 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.452 00:20:43 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:16.452 00:20:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.452 00:20:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.452 ************************************ 00:08:16.452 START TEST accel_fill 00:08:16.452 ************************************ 00:08:16.452 00:20:43 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 
00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:16.452 00:20:43 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:16.452 [2024-07-12 00:20:43.978285] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:16.452 [2024-07-12 00:20:43.978350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861127 ] 00:08:16.452 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.452 [2024-07-12 00:20:44.036796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.452 [2024-07-12 00:20:44.127192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:16.452 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:16.453 00:20:44 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:17.828 00:20:45 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.828 00:08:17.828 real 0m1.330s 00:08:17.828 user 0m1.206s 00:08:17.828 sys 0m0.126s 00:08:17.828 00:20:45 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:17.828 00:20:45 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 ************************************ 00:08:17.828 END TEST accel_fill 00:08:17.828 ************************************ 00:08:17.828 00:20:45 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:17.828 00:20:45 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:17.828 00:20:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:17.828 00:20:45 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 ************************************ 00:08:17.828 START TEST accel_copy_crc32c 00:08:17.828 ************************************ 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 
00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:17.828 [2024-07-12 00:20:45.362064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:17.828 [2024-07-12 00:20:45.362137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861257 ] 00:08:17.828 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.828 [2024-07-12 00:20:45.421071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.828 [2024-07-12 00:20:45.511196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 
00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.828 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:17.829 00:20:45 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 
00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.203 00:08:19.203 real 0m1.337s 00:08:19.203 user 0m1.214s 00:08:19.203 sys 0m0.124s 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:19.203 00:20:46 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:19.203 ************************************ 00:08:19.203 END TEST accel_copy_crc32c 00:08:19.203 ************************************ 00:08:19.203 00:20:46 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.203 00:20:46 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:19.203 00:20:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.203 00:20:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.203 ************************************ 00:08:19.203 START TEST accel_copy_crc32c_C2 00:08:19.203 ************************************ 00:08:19.203 00:20:46 
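The `accel_copy_crc32c` test above drives SPDK's `accel_perf` tool with `-w copy_crc32c`, which copies a buffer and computes a CRC-32C (Castagnoli) checksum over it in one operation. As a rough illustration only (not SPDK code — SPDK's software module is C, and offload engines may do this in hardware), a minimal pure-Python sketch of the operation:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; XOR in the reflected polynomial when the LSB is set
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def copy_crc32c(src: bytes) -> tuple[bytes, int]:
    """Fused copy + checksum: return the copied buffer and its CRC-32C."""
    dst = bytes(src)            # the copy step
    return dst, crc32c(dst)    # checksum computed over the copied data
```

The standard CRC-32C check value for the ASCII string "123456789" is 0xE3069283, which this sketch reproduces.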
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:19.203 [2024-07-12 00:20:46.744519] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:19.203 [2024-07-12 00:20:46.744584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861379 ] 00:08:19.203 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.203 [2024-07-12 00:20:46.802769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.203 [2024-07-12 00:20:46.891682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 
00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.203 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@20 -- # val=Yes 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:19.204 00:20:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r 
var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:20.578 00:20:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.578 00:08:20.578 real 0m1.333s 00:08:20.578 user 0m1.205s 00:08:20.578 sys 0m0.130s 00:08:20.579 00:20:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:20.579 00:20:48 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:20.579 ************************************ 00:08:20.579 END TEST accel_copy_crc32c_C2 00:08:20.579 ************************************ 00:08:20.579 00:20:48 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:20.579 00:20:48 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:20.579 00:20:48 accel -- 
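The `accel_dualcast` test that starts above runs `accel_perf -w dualcast`, an opcode that writes one source buffer to two destination buffers in a single operation. As an illustrative sketch only (not the SPDK implementation), the semantics in Python:

```python
def dualcast(src: bytes) -> tuple[bytes, bytes]:
    """Write the same source buffer to two destinations in one operation."""
    dst1 = bytes(src)  # first destination copy
    dst2 = bytes(src)  # second destination copy
    return dst1, dst2
```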
common/autotest_common.sh@1103 -- # xtrace_disable 00:08:20.579 00:20:48 accel -- common/autotest_common.sh@10 -- # set +x 00:08:20.579 ************************************ 00:08:20.579 START TEST accel_dualcast 00:08:20.579 ************************************ 00:08:20.579 00:20:48 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:20.579 [2024-07-12 00:20:48.131895] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:20.579 [2024-07-12 00:20:48.131970] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861587 ] 00:08:20.579 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.579 [2024-07-12 00:20:48.191322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.579 [2024-07-12 00:20:48.281186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 
00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- 
accel/accel.sh@20 -- # val=32 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:20.579 00:20:48 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- 
accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:21.956 00:20:49 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:21.956 00:20:49 
accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:21.956 00:08:21.956 real 0m1.337s 00:08:21.956 user 0m1.215s 00:08:21.956 sys 0m0.122s 00:08:21.956 00:20:49 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.956 00:20:49 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:21.956 ************************************ 00:08:21.956 END TEST accel_dualcast 00:08:21.956 ************************************ 00:08:21.956 00:20:49 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:21.956 00:20:49 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:21.956 00:20:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.956 00:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.956 ************************************ 00:08:21.956 START TEST accel_compare 00:08:21.956 ************************************ 00:08:21.956 00:20:49 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:21.956 00:20:49 accel.accel_compare -- 
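The `accel_compare` test that starts above runs `accel_perf -w compare`, which checks whether two buffers hold identical contents. As a sketch of the semantics only (not SPDK code), expressed with `memcmp`-style return values:

```python
def compare(buf_a: bytes, buf_b: bytes) -> int:
    """Return 0 when the buffers match, nonzero otherwise (memcmp-style)."""
    return 0 if buf_a == buf_b else 1
```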
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:21.956 [2024-07-12 00:20:49.515294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:21.956 [2024-07-12 00:20:49.515358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861712 ] 00:08:21.956 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.956 [2024-07-12 00:20:49.572911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.956 [2024-07-12 00:20:49.662851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 
00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.956 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:21.957 00:20:49 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:21.957 00:20:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:23.328 00:20:50 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:23.328 00:08:23.328 real 0m1.338s 00:08:23.328 user 0m1.215s 00:08:23.328 sys 0m0.124s 00:08:23.328 00:20:50 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.328 00:20:50 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:23.328 ************************************ 00:08:23.328 END TEST accel_compare 00:08:23.328 ************************************ 00:08:23.328 00:20:50 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:23.328 00:20:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:23.328 00:20:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.328 00:20:50 accel -- common/autotest_common.sh@10 -- # set +x 00:08:23.328 ************************************ 00:08:23.328 START TEST accel_xor 00:08:23.328 ************************************ 00:08:23.328 00:20:50 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w xor -y 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:23.328 00:20:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:23.328 [2024-07-12 00:20:50.908241] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:23.328 [2024-07-12 00:20:50.908307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861832 ] 00:08:23.328 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.328 [2024-07-12 00:20:50.967199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.328 [2024-07-12 00:20:51.057274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:23.328 00:20:51 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.328 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:23.329 00:20:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.699 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # 
IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.700 00:08:24.700 real 0m1.335s 00:08:24.700 user 0m1.204s 00:08:24.700 sys 0m0.134s 00:08:24.700 00:20:52 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.700 00:20:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:24.700 ************************************ 00:08:24.700 END TEST accel_xor 00:08:24.700 ************************************ 00:08:24.700 00:20:52 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:24.700 00:20:52 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:24.700 00:20:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.700 00:20:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:24.700 ************************************ 00:08:24.700 START TEST accel_xor 00:08:24.700 ************************************ 00:08:24.700 00:20:52 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:24.700 00:20:52 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:24.700 [2024-07-12 00:20:52.294364] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:24.700 [2024-07-12 00:20:52.294434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861961 ] 00:08:24.700 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.700 [2024-07-12 00:20:52.352561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.700 [2024-07-12 00:20:52.442607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.700 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:24.701 00:20:52 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:24.701 00:20:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:26.073 00:20:53 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:26.073 00:08:26.073 real 0m1.337s 00:08:26.073 user 0m1.207s 00:08:26.073 sys 0m0.133s 00:08:26.073 00:20:53 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.074 00:20:53 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 END TEST accel_xor 00:08:26.074 ************************************ 00:08:26.074 00:20:53 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:26.074 00:20:53 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:26.074 00:20:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.074 00:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 START TEST accel_dif_verify 00:08:26.074 ************************************ 00:08:26.074 00:20:53 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:26.074 00:20:53 accel.accel_dif_verify -- 
accel/accel.sh@17 -- # local accel_module 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:26.074 [2024-07-12 00:20:53.684840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:26.074 [2024-07-12 00:20:53.684907] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862166 ] 00:08:26.074 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.074 [2024-07-12 00:20:53.743425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.074 [2024-07-12 00:20:53.833997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:26.074 00:20:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:27.447 00:20:55 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.447 00:08:27.448 real 0m1.340s 00:08:27.448 user 0m1.222s 00:08:27.448 sys 0m0.122s 00:08:27.448 00:20:55 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.448 00:20:55 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:27.448 ************************************ 00:08:27.448 END TEST accel_dif_verify 00:08:27.448 ************************************ 00:08:27.448 00:20:55 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:27.448 00:20:55 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:27.448 00:20:55 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:08:27.448 00:20:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.448 ************************************ 00:08:27.448 START TEST accel_dif_generate 00:08:27.448 ************************************ 00:08:27.448 00:20:55 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:27.448 [2024-07-12 00:20:55.076334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:27.448 [2024-07-12 00:20:55.076408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862291 ] 00:08:27.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.448 [2024-07-12 00:20:55.134023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.448 [2024-07-12 00:20:55.223596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.448 00:20:55 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:27.448 00:20:55 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:28.822 00:20:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.822 00:08:28.822 real 0m1.333s 00:08:28.822 user 0m1.214s 00:08:28.822 sys 0m0.122s 00:08:28.822 00:20:56 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.822 00:20:56 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:28.822 ************************************ 00:08:28.822 END TEST accel_dif_generate 00:08:28.822 ************************************ 00:08:28.822 00:20:56 accel -- 
accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:28.822 00:20:56 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:08:28.822 00:20:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.822 00:20:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:28.822 ************************************ 00:08:28.822 START TEST accel_dif_generate_copy 00:08:28.822 ************************************ 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:28.822 00:20:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@41 -- # jq -r . 00:08:28.822 [2024-07-12 00:20:56.460875] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:28.822 [2024-07-12 00:20:56.460942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862417 ] 00:08:28.822 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.822 [2024-07-12 00:20:56.520156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.822 [2024-07-12 00:20:56.610760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy 
-- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val=No 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:29.081 00:20:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.015 00:08:30.015 real 0m1.337s 00:08:30.015 user 0m1.204s 00:08:30.015 sys 0m0.135s 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:30.015 00:20:57 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.015 ************************************ 00:08:30.015 END TEST accel_dif_generate_copy 00:08:30.015 ************************************ 00:08:30.015 00:20:57 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:30.015 00:20:57 accel -- 
accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.015 00:20:57 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:08:30.015 00:20:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:30.015 00:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.015 ************************************ 00:08:30.015 START TEST accel_comp 00:08:30.015 ************************************ 00:08:30.015 00:20:57 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.015 00:20:57 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:30.015 00:20:57 
accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:30.015 [2024-07-12 00:20:57.848229] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:30.015 [2024-07-12 00:20:57.848298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862538 ] 00:08:30.274 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.274 [2024-07-12 00:20:57.906907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.274 [2024-07-12 00:20:57.997689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # 
val= 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.274 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" 
in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:30.275 00:20:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:31.654 00:20:59 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.654 00:08:31.654 real 0m1.339s 00:08:31.654 user 0m1.211s 00:08:31.654 sys 0m0.130s 00:08:31.654 00:20:59 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.654 00:20:59 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:31.654 ************************************ 00:08:31.654 END TEST accel_comp 00:08:31.654 ************************************ 00:08:31.654 00:20:59 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:31.654 00:20:59 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:08:31.654 00:20:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.654 00:20:59 accel -- common/autotest_common.sh@10 -- # set +x 00:08:31.654 ************************************ 00:08:31.654 START TEST accel_decomp 00:08:31.654 ************************************ 00:08:31.654 00:20:59 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:31.654 00:20:59 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:31.654 00:20:59 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:31.654 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.654 00:20:59 accel.accel_decomp -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:31.655 [2024-07-12 00:20:59.238649] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:31.655 [2024-07-12 00:20:59.238717] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862748 ] 00:08:31.655 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.655 [2024-07-12 00:20:59.297901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.655 [2024-07-12 00:20:59.387047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:31.655 00:20:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:33.030 00:21:00 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.030 00:08:33.030 real 0m1.338s 00:08:33.030 user 0m1.211s 00:08:33.030 sys 0m0.129s 00:08:33.030 00:21:00 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:33.030 00:21:00 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:33.030 ************************************ 00:08:33.030 END TEST accel_decomp 00:08:33.030 ************************************ 00:08:33.030 00:21:00 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.030 00:21:00 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:33.030 00:21:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:33.030 00:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.030 ************************************ 00:08:33.030 START TEST accel_decmop_full 00:08:33.030 ************************************ 00:08:33.030 00:21:00 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 
00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:08:33.030 [2024-07-12 00:21:00.625819] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:33.030 [2024-07-12 00:21:00.625886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862908 ] 00:08:33.030 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.030 [2024-07-12 00:21:00.683985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.030 [2024-07-12 00:21:00.772067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- 
accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.030 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 
00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:33.031 00:21:00 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 
accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:34.436 00:21:01 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.436 00:08:34.436 real 0m1.346s 00:08:34.436 user 0m1.230s 00:08:34.436 sys 0m0.117s 00:08:34.436 00:21:01 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:34.436 00:21:01 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:08:34.436 ************************************ 00:08:34.436 END TEST accel_decmop_full 00:08:34.436 ************************************ 00:08:34.436 00:21:01 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:34.436 00:21:01 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:08:34.436 00:21:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:34.436 00:21:01 accel -- common/autotest_common.sh@10 -- # set +x 00:08:34.436 ************************************ 
00:08:34.436 START TEST accel_decomp_mcore 00:08:34.436 ************************************ 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:34.436 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:34.437 [2024-07-12 00:21:02.024175] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:08:34.437 [2024-07-12 00:21:02.024240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863093 ]
00:08:34.437 EAL: No free 2048 kB hugepages reported on node 1
00:08:34.437 [2024-07-12 00:21:02.083668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:34.437 [2024-07-12 00:21:02.176449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:34.437 [2024-07-12 00:21:02.176544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:34.437 [2024-07-12 00:21:02.176547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.437 [2024-07-12 00:21:02.176495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:34.437 00:21:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.806 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:35.807
00:08:35.807 real 0m1.350s
00:08:35.807 user 0m4.531s
00:08:35.807 sys 0m0.129s
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:35.807 00:21:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:35.807 ************************************
00:08:35.807 END TEST accel_decomp_mcore
00:08:35.807 ************************************
00:08:35.807 00:21:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:35.807 00:21:03 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:08:35.807 00:21:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:35.807 00:21:03 accel -- common/autotest_common.sh@10 -- # set +x
00:08:35.807 ************************************
00:08:35.807 START TEST accel_decomp_full_mcore
00:08:35.807 ************************************
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:08:35.807 [2024-07-12 00:21:03.424987] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:35.807 [2024-07-12 00:21:03.425055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863276 ]
00:08:35.807 EAL: No free 2048 kB hugepages reported on node 1
00:08:35.807 [2024-07-12 00:21:03.483836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:35.807 [2024-07-12 00:21:03.577774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:35.807 [2024-07-12 00:21:03.577878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:35.807 [2024-07-12 00:21:03.577881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.807 [2024-07-12 00:21:03.577825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:35.807 00:21:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:37.182
00:08:37.182 real 0m1.365s
00:08:37.182 user 0m4.587s
00:08:37.182 sys 0m0.140s
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:37.182 00:21:04 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:08:37.182 ************************************
00:08:37.182 END TEST accel_decomp_full_mcore
00:08:37.182 ************************************
00:08:37.182 00:21:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:37.182 00:21:04 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:08:37.182 00:21:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:37.182 00:21:04 accel -- common/autotest_common.sh@10 -- # set +x
00:08:37.182 ************************************
00:08:37.182 START TEST accel_decomp_mthread
00:08:37.182 ************************************
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:08:37.182 00:21:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:08:37.182 [2024-07-12 00:21:04.844661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:37.182 [2024-07-12 00:21:04.844746] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863447 ]
00:08:37.182 EAL: No free 2048 kB hugepages reported on node 1
00:08:37.182 [2024-07-12 00:21:04.905313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.182 [2024-07-12 00:21:04.995819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:37.441 00:21:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:38.378
00:08:38.378 real 0m1.350s
00:08:38.378 user 0m1.210s
00:08:38.378 sys 0m0.138s
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:08:38.378 00:21:06 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:08:38.378 ************************************
00:08:38.378 END TEST accel_decomp_mthread
00:08:38.378 ************************************
00:08:38.378 00:21:06 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:38.378 00:21:06 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:08:38.378 00:21:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:38.378 00:21:06 accel -- common/autotest_common.sh@10 -- # set +x
00:08:38.638 ************************************
00:08:38.638 START TEST accel_decomp_full_mthread
00:08:38.638 ************************************
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:08:38.638 [2024-07-12 00:21:06.244516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:08:38.638 [2024-07-12 00:21:06.244583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863570 ]
00:08:38.638 EAL: No free 2048 kB hugepages reported on node 1
00:08:38.638 [2024-07-12 00:21:06.304016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:38.638 [2024-07-12 00:21:06.393314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:08:38.638 00:21:06 accel.accel_decomp_full_mthread --
accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:38.638 00:21:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:38.638 00:21:06 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:38.638 00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:38.638 
00:21:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:40.012 00:08:40.012 real 0m1.378s 00:08:40.012 user 0m1.246s 00:08:40.012 sys 0m0.130s 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.012 00:21:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 ************************************ 00:08:40.012 END TEST accel_decomp_full_mthread 00:08:40.012 ************************************ 00:08:40.012 00:21:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:40.012 00:21:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:40.012 00:21:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:40.012 00:21:07 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:40.012 00:21:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:40.012 00:21:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.012 00:21:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:08:40.012 00:21:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 00:21:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.012 00:21:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.012 00:21:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:40.012 00:21:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:40.012 00:21:07 accel -- accel/accel.sh@41 -- # jq -r . 00:08:40.012 ************************************ 00:08:40.012 START TEST accel_dif_functional_tests 00:08:40.012 ************************************ 00:08:40.012 00:21:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:40.012 [2024-07-12 00:21:07.702213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:40.012 [2024-07-12 00:21:07.702314] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863699 ] 00:08:40.012 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.012 [2024-07-12 00:21:07.762571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.272 [2024-07-12 00:21:07.852763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.272 [2024-07-12 00:21:07.852807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.272 [2024-07-12 00:21:07.852810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.272 00:08:40.272 00:08:40.272 CUnit - A unit testing framework for C - Version 2.1-3 00:08:40.272 http://cunit.sourceforge.net/ 00:08:40.272 00:08:40.272 00:08:40.272 Suite: accel_dif 00:08:40.272 Test: verify: DIF generated, GUARD check ...passed 00:08:40.272 Test: verify: DIF generated, APPTAG check ...passed 00:08:40.272 Test: verify: DIF generated, REFTAG check ...passed 
00:08:40.272 Test: verify: DIF not generated, GUARD check ...[2024-07-12 00:21:07.935250] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:40.272 passed 00:08:40.273 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 00:21:07.935327] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:40.273 passed 00:08:40.273 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 00:21:07.935365] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:40.273 passed 00:08:40.273 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:40.273 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 00:21:07.935435] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:40.273 passed 00:08:40.273 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:40.273 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:40.273 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:40.273 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 00:21:07.935594] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:40.273 passed 00:08:40.273 Test: verify copy: DIF generated, GUARD check ...passed 00:08:40.273 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:40.273 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:40.273 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 00:21:07.935779] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:40.273 passed 00:08:40.273 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 00:21:07.935820] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:40.273 passed 00:08:40.273 Test: verify copy: DIF not generated, REFTAG check 
...[2024-07-12 00:21:07.935855] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:40.273 passed 00:08:40.273 Test: generate copy: DIF generated, GUARD check ...passed 00:08:40.273 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:40.273 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:40.273 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:40.273 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:40.273 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:40.273 Test: generate copy: iovecs-len validate ...[2024-07-12 00:21:07.936107] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:40.273 passed 00:08:40.273 Test: generate copy: buffer alignment validate ...passed 00:08:40.273 00:08:40.273 Run Summary: Type Total Ran Passed Failed Inactive 00:08:40.273 suites 1 1 n/a 0 0 00:08:40.273 tests 26 26 26 0 0 00:08:40.273 asserts 115 115 115 0 n/a 00:08:40.273 00:08:40.273 Elapsed time = 0.003 seconds 00:08:40.273 00:08:40.273 real 0m0.428s 00:08:40.273 user 0m0.606s 00:08:40.273 sys 0m0.166s 00:08:40.273 00:21:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.273 00:21:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:40.273 ************************************ 00:08:40.273 END TEST accel_dif_functional_tests 00:08:40.273 ************************************ 00:08:40.532 00:08:40.532 real 0m30.125s 00:08:40.532 user 0m33.546s 00:08:40.532 sys 0m4.214s 00:08:40.532 00:21:08 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.532 00:21:08 accel -- common/autotest_common.sh@10 -- # set +x 00:08:40.532 ************************************ 00:08:40.532 END TEST accel 00:08:40.532 ************************************ 00:08:40.532 00:21:08 -- 
spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:40.532 00:21:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:40.532 00:21:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.532 00:21:08 -- common/autotest_common.sh@10 -- # set +x 00:08:40.532 ************************************ 00:08:40.532 START TEST accel_rpc 00:08:40.532 ************************************ 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:40.532 * Looking for test storage... 00:08:40.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:40.532 00:21:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:40.532 00:21:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=863909 00:08:40.532 00:21:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 863909 00:08:40.532 00:21:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 863909 ']' 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:40.532 00:21:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.532 [2024-07-12 00:21:08.269567] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:40.532 [2024-07-12 00:21:08.269698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863909 ] 00:08:40.532 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.532 [2024-07-12 00:21:08.331014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.791 [2024-07-12 00:21:08.418386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.791 00:21:08 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:40.791 00:21:08 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:40.791 00:21:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:40.791 00:21:08 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:40.791 00:21:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:40.791 00:21:08 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:40.791 00:21:08 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:40.791 00:21:08 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:40.791 00:21:08 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.791 00:21:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.791 ************************************ 00:08:40.791 START TEST accel_assign_opcode 00:08:40.791 ************************************ 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.791 [2024-07-12 00:21:08.535166] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:40.791 [2024-07-12 00:21:08.543165] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:40.791 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.051 software 00:08:41.051 00:08:41.051 real 0m0.258s 00:08:41.051 user 0m0.039s 00:08:41.051 sys 0m0.007s 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.051 00:21:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:41.051 ************************************ 00:08:41.051 END TEST accel_assign_opcode 00:08:41.051 ************************************ 00:08:41.051 00:21:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 863909 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 863909 ']' 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 863909 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 863909 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 863909' 00:08:41.051 killing process with pid 863909 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@965 -- # kill 863909 00:08:41.051 00:21:08 accel_rpc -- common/autotest_common.sh@970 -- # wait 863909 00:08:41.334 00:08:41.334 real 0m0.937s 00:08:41.334 user 0m0.927s 00:08:41.334 sys 0m0.392s 00:08:41.334 00:21:09 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:41.334 00:21:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 ************************************ 00:08:41.334 END TEST accel_rpc 00:08:41.334 
************************************ 00:08:41.334 00:21:09 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:41.334 00:21:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:41.334 00:21:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:41.334 00:21:09 -- common/autotest_common.sh@10 -- # set +x 00:08:41.334 ************************************ 00:08:41.334 START TEST app_cmdline 00:08:41.334 ************************************ 00:08:41.334 00:21:09 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:41.593 * Looking for test storage... 00:08:41.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:41.593 00:21:09 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:41.593 00:21:09 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=864400 00:08:41.593 00:21:09 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:41.593 00:21:09 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 864400 00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 864400 ']' 00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:41.593 00:21:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:41.593 [2024-07-12 00:21:09.265003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:41.593 [2024-07-12 00:21:09.265113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864400 ] 00:08:41.593 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.593 [2024-07-12 00:21:09.339425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.852 [2024-07-12 00:21:09.446312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.852 00:21:09 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:41.852 00:21:09 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:41.852 00:21:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:42.419 { 00:08:42.419 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:42.419 "fields": { 00:08:42.419 "major": 24, 00:08:42.419 "minor": 5, 00:08:42.419 "patch": 1, 00:08:42.419 "suffix": "-pre", 00:08:42.419 "commit": "5fa2f5086" 00:08:42.419 } 00:08:42.419 } 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:42.419 00:21:09 
app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.419 00:21:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.419 00:21:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:42.419 00:21:09 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.419 00:21:10 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:42.419 00:21:10 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:42.419 00:21:10 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.419 00:21:10 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:42.419 00:21:10 
app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:42.678 request: 00:08:42.678 { 00:08:42.678 "method": "env_dpdk_get_mem_stats", 00:08:42.678 "req_id": 1 00:08:42.678 } 00:08:42.678 Got JSON-RPC error response 00:08:42.678 response: 00:08:42.678 { 00:08:42.678 "code": -32601, 00:08:42.678 "message": "Method not found" 00:08:42.678 } 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.678 00:21:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 864400 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 864400 ']' 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 864400 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:42.678 00:21:10 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 864400 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 864400' 00:08:42.679 killing process with pid 864400 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@965 -- # kill 864400 00:08:42.679 00:21:10 app_cmdline -- common/autotest_common.sh@970 -- # wait 864400 00:08:42.938 00:08:42.938 real 0m1.464s 00:08:42.938 user 0m2.012s 00:08:42.938 sys 0m0.479s 00:08:42.938 00:21:10 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:08:42.938 00:21:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:42.938 ************************************ 00:08:42.938 END TEST app_cmdline 00:08:42.938 ************************************ 00:08:42.938 00:21:10 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.938 00:21:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:42.938 00:21:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.938 00:21:10 -- common/autotest_common.sh@10 -- # set +x 00:08:42.938 ************************************ 00:08:42.938 START TEST version 00:08:42.938 ************************************ 00:08:42.938 00:21:10 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:42.938 * Looking for test storage... 00:08:42.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:42.938 00:21:10 version -- app/version.sh@17 -- # get_header_version major 00:08:42.938 00:21:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # cut -f2 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.938 00:21:10 version -- app/version.sh@17 -- # major=24 00:08:42.938 00:21:10 version -- app/version.sh@18 -- # get_header_version minor 00:08:42.938 00:21:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # cut -f2 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.938 00:21:10 version -- app/version.sh@18 -- # minor=5 00:08:42.938 00:21:10 version -- app/version.sh@19 -- # get_header_version patch 00:08:42.938 00:21:10 version 
-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # cut -f2 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.938 00:21:10 version -- app/version.sh@19 -- # patch=1 00:08:42.938 00:21:10 version -- app/version.sh@20 -- # get_header_version suffix 00:08:42.938 00:21:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # cut -f2 00:08:42.938 00:21:10 version -- app/version.sh@14 -- # tr -d '"' 00:08:42.938 00:21:10 version -- app/version.sh@20 -- # suffix=-pre 00:08:42.938 00:21:10 version -- app/version.sh@22 -- # version=24.5 00:08:42.938 00:21:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:42.938 00:21:10 version -- app/version.sh@25 -- # version=24.5.1 00:08:42.938 00:21:10 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:42.938 00:21:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:42.938 00:21:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:43.197 00:21:10 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:43.197 00:21:10 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:43.197 00:08:43.197 real 0m0.108s 00:08:43.197 user 0m0.058s 00:08:43.197 sys 0m0.070s 00:08:43.197 00:21:10 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:43.197 00:21:10 version -- common/autotest_common.sh@10 -- # set +x 
00:08:43.197 ************************************ 00:08:43.197 END TEST version 00:08:43.197 ************************************ 00:08:43.197 00:21:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@198 -- # uname -s 00:08:43.197 00:21:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:43.197 00:21:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.197 00:21:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:43.197 00:21:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:43.197 00:21:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.197 00:21:10 -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 00:21:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:43.197 00:21:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:43.197 00:21:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.197 00:21:10 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:43.197 00:21:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.197 00:21:10 -- common/autotest_common.sh@10 -- # set +x 00:08:43.197 ************************************ 00:08:43.197 START TEST nvmf_tcp 00:08:43.197 ************************************ 00:08:43.197 00:21:10 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.197 * Looking for test storage... 
00:08:43.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.197 00:21:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.197 
00:21:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.197 00:21:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.197 00:21:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.197 00:21:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:43.198 00:21:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:43.198 00:21:10 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:43.198 00:21:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:43.198 00:21:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.198 00:21:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:43.198 00:21:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:43.198 00:21:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.198 
************************************ 00:08:43.198 START TEST nvmf_example 00:08:43.198 ************************************ 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:43.198 * Looking for test storage... 00:08:43.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.198 00:21:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:11 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.198 00:21:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.104 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:45.105 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:45.105 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:45.105 Found net devices under 0000:08:00.0: cvl_0_0 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:45.105 Found net devices under 0000:08:00.1: cvl_0_1 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.105 00:21:12 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:08:45.105 00:08:45.105 --- 10.0.0.2 ping statistics --- 00:08:45.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.105 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:08:45.105 00:08:45.105 --- 10.0.0.1 ping statistics --- 00:08:45.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.105 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.105 00:21:12 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=866032 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 866032 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 866032 ']' 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.105 00:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.105 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:45.363 00:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:45.363 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.560 Initializing NVMe Controllers 00:08:57.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:57.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:57.560 Initialization complete. Launching workers. 
00:08:57.560 ======================================================== 00:08:57.560 Latency(us) 00:08:57.560 Device Information : IOPS MiB/s Average min max 00:08:57.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14033.98 54.82 4560.19 1111.83 16544.24 00:08:57.560 ======================================================== 00:08:57.560 Total : 14033.98 54.82 4560.19 1111.83 16544.24 00:08:57.560 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.560 rmmod nvme_tcp 00:08:57.560 rmmod nvme_fabrics 00:08:57.560 rmmod nvme_keyring 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 866032 ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 866032 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 866032 ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 866032 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 866032 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 866032' 00:08:57.560 killing process with pid 866032 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 866032 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 866032 00:08:57.560 nvmf threads initialize successfully 00:08:57.560 bdev subsystem init successfully 00:08:57.560 created a nvmf target service 00:08:57.560 create targets's poll groups done 00:08:57.560 all subsystems of target started 00:08:57.560 nvmf target is running 00:08:57.560 all subsystems of target stopped 00:08:57.560 destroy targets's poll groups done 00:08:57.560 destroyed the nvmf target service 00:08:57.560 bdev subsystem finish successfully 00:08:57.560 nvmf threads destroy successfully 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.560 00:21:23 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.820 00:21:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.820 00:21:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:57.820 00:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.820 00:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:58.081 00:08:58.081 real 0m14.721s 00:08:58.081 user 0m41.181s 00:08:58.081 sys 0m3.235s 00:08:58.081 00:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.081 00:21:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:58.081 ************************************ 00:08:58.081 END TEST nvmf_example 00:08:58.081 ************************************ 00:08:58.081 00:21:25 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:58.081 00:21:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:58.081 00:21:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.081 00:21:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.081 ************************************ 00:08:58.081 START TEST nvmf_filesystem 00:08:58.081 ************************************ 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:58.081 * Looking for test storage... 
00:08:58.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # 
CONFIG_SHARED=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:58.081 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:58.082 #define SPDK_CONFIG_H 00:08:58.082 
#define SPDK_CONFIG_APPS 1 00:08:58.082 #define SPDK_CONFIG_ARCH native 00:08:58.082 #undef SPDK_CONFIG_ASAN 00:08:58.082 #undef SPDK_CONFIG_AVAHI 00:08:58.082 #undef SPDK_CONFIG_CET 00:08:58.082 #define SPDK_CONFIG_COVERAGE 1 00:08:58.082 #define SPDK_CONFIG_CROSS_PREFIX 00:08:58.082 #undef SPDK_CONFIG_CRYPTO 00:08:58.082 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:58.082 #undef SPDK_CONFIG_CUSTOMOCF 00:08:58.082 #undef SPDK_CONFIG_DAOS 00:08:58.082 #define SPDK_CONFIG_DAOS_DIR 00:08:58.082 #define SPDK_CONFIG_DEBUG 1 00:08:58.082 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:58.082 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:58.082 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:58.082 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:58.082 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:58.082 #undef SPDK_CONFIG_DPDK_UADK 00:08:58.082 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:58.082 #define SPDK_CONFIG_EXAMPLES 1 00:08:58.082 #undef SPDK_CONFIG_FC 00:08:58.082 #define SPDK_CONFIG_FC_PATH 00:08:58.082 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:58.082 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:58.082 #undef SPDK_CONFIG_FUSE 00:08:58.082 #undef SPDK_CONFIG_FUZZER 00:08:58.082 #define SPDK_CONFIG_FUZZER_LIB 00:08:58.082 #undef SPDK_CONFIG_GOLANG 00:08:58.082 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:58.082 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:58.082 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:58.082 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:58.082 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:58.082 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:58.082 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:58.082 #define SPDK_CONFIG_IDXD 1 00:08:58.082 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:58.082 #undef SPDK_CONFIG_IPSEC_MB 00:08:58.082 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:58.082 
#define SPDK_CONFIG_ISAL 1 00:08:58.082 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:58.082 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:58.082 #define SPDK_CONFIG_LIBDIR 00:08:58.082 #undef SPDK_CONFIG_LTO 00:08:58.082 #define SPDK_CONFIG_MAX_LCORES 00:08:58.082 #define SPDK_CONFIG_NVME_CUSE 1 00:08:58.082 #undef SPDK_CONFIG_OCF 00:08:58.082 #define SPDK_CONFIG_OCF_PATH 00:08:58.082 #define SPDK_CONFIG_OPENSSL_PATH 00:08:58.082 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:58.082 #define SPDK_CONFIG_PGO_DIR 00:08:58.082 #undef SPDK_CONFIG_PGO_USE 00:08:58.082 #define SPDK_CONFIG_PREFIX /usr/local 00:08:58.082 #undef SPDK_CONFIG_RAID5F 00:08:58.082 #undef SPDK_CONFIG_RBD 00:08:58.082 #define SPDK_CONFIG_RDMA 1 00:08:58.082 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:58.082 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:58.082 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:58.082 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:58.082 #define SPDK_CONFIG_SHARED 1 00:08:58.082 #undef SPDK_CONFIG_SMA 00:08:58.082 #define SPDK_CONFIG_TESTS 1 00:08:58.082 #undef SPDK_CONFIG_TSAN 00:08:58.082 #define SPDK_CONFIG_UBLK 1 00:08:58.082 #define SPDK_CONFIG_UBSAN 1 00:08:58.082 #undef SPDK_CONFIG_UNIT_TESTS 00:08:58.082 #undef SPDK_CONFIG_URING 00:08:58.082 #define SPDK_CONFIG_URING_PATH 00:08:58.082 #undef SPDK_CONFIG_URING_ZNS 00:08:58.082 #undef SPDK_CONFIG_USDT 00:08:58.082 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:58.082 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:58.082 #define SPDK_CONFIG_VFIO_USER 1 00:08:58.082 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:58.082 #define SPDK_CONFIG_VHOST 1 00:08:58.082 #define SPDK_CONFIG_VIRTIO 1 00:08:58.082 #undef SPDK_CONFIG_VTUNE 00:08:58.082 #define SPDK_CONFIG_VTUNE_DIR 00:08:58.082 #define SPDK_CONFIG_WERROR 1 00:08:58.082 #define SPDK_CONFIG_WPDK_DIR 00:08:58.082 #undef SPDK_CONFIG_XNVME 00:08:58.082 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:58.082 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:08:58.083 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:08:58.083 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:08:58.083 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export 
SPDK_TEST_VMD 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export 
SPDK_TEST_NVMF_NICS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:58.083 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:58.084 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j32 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:08:58.084 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 867346 ]] 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 867346 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.yDzcRL 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 
00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.yDzcRL/tests/target /tmp/spdk.yDzcRL 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1957711872 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3326717952 00:08:58.084 00:21:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=41829347328 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=53546164224 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=11716816896 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26768371712 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773082112 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4710400 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=10700750848 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=10709233664 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
uses["$mount"]=8482816 00:08:58.084 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=26772754432 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=26773082112 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=327680 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=5354610688 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5354614784 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:08:58.085 * Looking for test storage... 
00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=41829347328 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=13931409408 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.085 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.086 00:21:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:00.027 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:00.027 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:00.027 Found net devices under 0000:08:00.0: cvl_0_0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:00.027 Found net devices under 0000:08:00.1: cvl_0_1 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:09:00.027 00:09:00.027 --- 10.0.0.2 ping statistics --- 00:09:00.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.027 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:09:00.027 00:09:00.027 --- 10.0.0.1 ping statistics --- 00:09:00.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.027 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:00.027 00:21:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.028 ************************************ 00:09:00.028 START TEST nvmf_filesystem_no_in_capsule 00:09:00.028 ************************************ 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=868525 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 868525 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 868525 ']' 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:00.028 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.028 [2024-07-12 00:21:27.726939] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:00.028 [2024-07-12 00:21:27.727041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.028 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.028 [2024-07-12 00:21:27.794248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.288 [2024-07-12 00:21:27.888564] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.288 [2024-07-12 00:21:27.888635] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.288 [2024-07-12 00:21:27.888652] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.288 [2024-07-12 00:21:27.888666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.288 [2024-07-12 00:21:27.888677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:00.288 [2024-07-12 00:21:27.888758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.288 [2024-07-12 00:21:27.888815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.288 [2024-07-12 00:21:27.888898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.288 [2024-07-12 00:21:27.888863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.288 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:00.288 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:09:00.288 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.288 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.288 00:21:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.288 [2024-07-12 00:21:28.030234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.288 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 Malloc1 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.547 00:21:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 [2024-07-12 00:21:28.192832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:09:00.547 { 00:09:00.547 "name": "Malloc1", 00:09:00.547 "aliases": [ 00:09:00.547 "a7c990e7-98e7-4880-9889-3ee566f8f243" 00:09:00.547 ], 00:09:00.547 "product_name": "Malloc disk", 
00:09:00.547 "block_size": 512, 00:09:00.547 "num_blocks": 1048576, 00:09:00.547 "uuid": "a7c990e7-98e7-4880-9889-3ee566f8f243", 00:09:00.547 "assigned_rate_limits": { 00:09:00.547 "rw_ios_per_sec": 0, 00:09:00.547 "rw_mbytes_per_sec": 0, 00:09:00.547 "r_mbytes_per_sec": 0, 00:09:00.547 "w_mbytes_per_sec": 0 00:09:00.547 }, 00:09:00.547 "claimed": true, 00:09:00.547 "claim_type": "exclusive_write", 00:09:00.547 "zoned": false, 00:09:00.547 "supported_io_types": { 00:09:00.547 "read": true, 00:09:00.547 "write": true, 00:09:00.547 "unmap": true, 00:09:00.547 "write_zeroes": true, 00:09:00.547 "flush": true, 00:09:00.547 "reset": true, 00:09:00.547 "compare": false, 00:09:00.547 "compare_and_write": false, 00:09:00.547 "abort": true, 00:09:00.547 "nvme_admin": false, 00:09:00.547 "nvme_io": false 00:09:00.547 }, 00:09:00.547 "memory_domains": [ 00:09:00.547 { 00:09:00.547 "dma_device_id": "system", 00:09:00.547 "dma_device_type": 1 00:09:00.547 }, 00:09:00.547 { 00:09:00.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.547 "dma_device_type": 2 00:09:00.547 } 00:09:00.547 ], 00:09:00.547 "driver_specific": {} 00:09:00.547 } 00:09:00.547 ]' 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:09:00.547 00:21:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:00.547 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.113 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:01.113 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:09:01.113 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:01.113 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:01.113 00:21:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:09:03.014 00:21:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:03.014 00:21:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:03.580 00:21:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:04.146 00:21:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:05.537 ************************************ 00:09:05.537 START TEST filesystem_ext4 00:09:05.537 ************************************ 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
common/autotest_common.sh@928 -- # force=-F 00:09:05.537 00:21:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:05.537 mke2fs 1.46.5 (30-Dec-2021) 00:09:05.537 Discarding device blocks: 0/522240 done 00:09:05.537 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:05.537 Filesystem UUID: 769eef7b-aef5-4ea1-b2c3-24877747c262 00:09:05.537 Superblock backups stored on blocks: 00:09:05.537 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:05.537 00:09:05.537 Allocating group tables: 0/64 done 00:09:05.537 Writing inode tables: 0/64 done 00:09:07.432 Creating journal (8192 blocks): done 00:09:07.432 Writing superblocks and filesystem accounting information: 0/64 done 00:09:07.432 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:07.432 00:21:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 868525 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:07.432 00:09:07.432 real 0m1.995s 00:09:07.432 user 0m0.018s 00:09:07.432 sys 0m0.057s 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.432 00:21:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 ************************************ 00:09:07.432 END TEST filesystem_ext4 00:09:07.432 ************************************ 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.432 ************************************ 00:09:07.432 START TEST filesystem_btrfs 00:09:07.432 ************************************ 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs 
-- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:07.432 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:07.433 btrfs-progs v6.6.2 00:09:07.433 See https://btrfs.readthedocs.io for more information. 00:09:07.433 00:09:07.433 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:07.433 NOTE: several default settings have changed in version 5.15, please make sure 00:09:07.433 this does not affect your deployments: 00:09:07.433 - DUP for metadata (-m dup) 00:09:07.433 - enabled no-holes (-O no-holes) 00:09:07.433 - enabled free-space-tree (-R free-space-tree) 00:09:07.433 00:09:07.433 Label: (null) 00:09:07.433 UUID: c219e360-beb8-4486-8c5e-2d8f1720641e 00:09:07.433 Node size: 16384 00:09:07.433 Sector size: 4096 00:09:07.433 Filesystem size: 510.00MiB 00:09:07.433 Block group profiles: 00:09:07.433 Data: single 8.00MiB 00:09:07.433 Metadata: DUP 32.00MiB 00:09:07.433 System: DUP 8.00MiB 00:09:07.433 SSD detected: yes 00:09:07.433 Zoned device: no 00:09:07.433 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:07.433 Runtime features: free-space-tree 00:09:07.433 Checksum: crc32c 00:09:07.433 Number of devices: 1 00:09:07.433 Devices: 00:09:07.433 ID SIZE PATH 00:09:07.433 1 510.00MiB /dev/nvme0n1p1 00:09:07.433 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:09:07.433 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:07.998 00:21:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 868525 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:07.998 00:09:07.998 real 0m0.584s 00:09:07.998 user 0m0.030s 00:09:07.998 sys 0m0.139s 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:07.998 ************************************ 00:09:07.998 END TEST filesystem_btrfs 00:09:07.998 ************************************ 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:07.998 ************************************ 00:09:07.998 START TEST 
filesystem_xfs 00:09:07.998 ************************************ 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:09:07.998 00:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:07.998 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:07.998 = sectsz=512 attr=2, projid32bit=1 00:09:07.998 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:07.998 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:07.998 data = bsize=4096 blocks=130560, imaxpct=25 
00:09:07.998 = sunit=0 swidth=0 blks 00:09:07.998 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:07.998 log =internal log bsize=4096 blocks=16384, version=2 00:09:07.998 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:07.998 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:08.932 Discarding blocks...Done. 00:09:08.932 00:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:09:08.932 00:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 868525 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:12.212 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:12.213 00:09:12.213 real 0m3.884s 00:09:12.213 user 0m0.017s 00:09:12.213 sys 0m0.089s 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:12.213 ************************************ 00:09:12.213 END TEST filesystem_xfs 00:09:12.213 ************************************ 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:12.213 00:21:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 868525 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 868525 ']' 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 868525 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:12.213 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 868525 00:09:12.472 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:12.472 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:12.472 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 868525' 00:09:12.472 killing process with pid 868525 00:09:12.472 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 868525 00:09:12.472 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 868525 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:12.731 00:09:12.731 real 0m12.681s 00:09:12.731 user 0m48.715s 00:09:12.731 sys 0m2.004s 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 ************************************ 00:09:12.731 END TEST nvmf_filesystem_no_in_capsule 00:09:12.731 ************************************ 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 ************************************ 00:09:12.731 START TEST nvmf_filesystem_in_capsule 00:09:12.731 ************************************ 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:12.731 00:21:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=869940 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.731 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 869940 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 869940 ']' 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:12.732 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.732 [2024-07-12 00:21:40.462049] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:09:12.732 [2024-07-12 00:21:40.462144] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.732 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.732 [2024-07-12 00:21:40.528487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.991 [2024-07-12 00:21:40.618124] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.991 [2024-07-12 00:21:40.618197] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.991 [2024-07-12 00:21:40.618213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.991 [2024-07-12 00:21:40.618227] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.991 [2024-07-12 00:21:40.618239] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:12.991 [2024-07-12 00:21:40.618323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.991 [2024-07-12 00:21:40.618376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.991 [2024-07-12 00:21:40.618423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.991 [2024-07-12 00:21:40.618425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.991 [2024-07-12 00:21:40.762146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.991 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 Malloc1 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.250 00:21:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 [2024-07-12 00:21:40.923366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.250 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:09:13.250 { 00:09:13.250 "name": "Malloc1", 00:09:13.250 "aliases": [ 00:09:13.250 "204949d0-eae9-469a-98ef-a27b106ed515" 00:09:13.250 ], 00:09:13.250 "product_name": "Malloc disk", 00:09:13.250 "block_size": 512, 00:09:13.250 "num_blocks": 1048576, 00:09:13.250 "uuid": "204949d0-eae9-469a-98ef-a27b106ed515", 00:09:13.250 "assigned_rate_limits": { 
00:09:13.250 "rw_ios_per_sec": 0, 00:09:13.250 "rw_mbytes_per_sec": 0, 00:09:13.250 "r_mbytes_per_sec": 0, 00:09:13.250 "w_mbytes_per_sec": 0 00:09:13.250 }, 00:09:13.250 "claimed": true, 00:09:13.250 "claim_type": "exclusive_write", 00:09:13.250 "zoned": false, 00:09:13.250 "supported_io_types": { 00:09:13.250 "read": true, 00:09:13.250 "write": true, 00:09:13.250 "unmap": true, 00:09:13.250 "write_zeroes": true, 00:09:13.250 "flush": true, 00:09:13.250 "reset": true, 00:09:13.250 "compare": false, 00:09:13.250 "compare_and_write": false, 00:09:13.250 "abort": true, 00:09:13.250 "nvme_admin": false, 00:09:13.250 "nvme_io": false 00:09:13.250 }, 00:09:13.250 "memory_domains": [ 00:09:13.250 { 00:09:13.250 "dma_device_id": "system", 00:09:13.251 "dma_device_type": 1 00:09:13.251 }, 00:09:13.251 { 00:09:13.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.251 "dma_device_type": 2 00:09:13.251 } 00:09:13.251 ], 00:09:13.251 "driver_specific": {} 00:09:13.251 } 00:09:13.251 ]' 00:09:13.251 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:09:13.251 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:09:13.251 00:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:09:13.251 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:09:13.251 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:09:13.251 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:09:13.251 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:13.251 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme 
connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.817 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.817 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:09:13.817 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.817 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:13.817 00:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:09:15.715 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:15.715 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:15.715 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:15.972 00:21:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:15.972 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:16.230 00:21:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:16.488 00:21:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:17.422 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:17.422 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:17.422 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:17.422 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:17.422 00:21:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:17.681 ************************************ 00:09:17.681 START TEST filesystem_in_capsule_ext4 00:09:17.681 ************************************ 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:09:17.681 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 
00:09:17.681 mke2fs 1.46.5 (30-Dec-2021) 00:09:17.681 Discarding device blocks: 0/522240 done 00:09:17.681 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:17.681 Filesystem UUID: 3129403b-ac1e-4ec1-8512-cb8aca9c6564 00:09:17.681 Superblock backups stored on blocks: 00:09:17.681 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:17.681 00:09:17.681 Allocating group tables: 0/64 done 00:09:17.681 Writing inode tables: 0/64 done 00:09:17.681 Creating journal (8192 blocks): done 00:09:17.939 Writing superblocks and filesystem accounting information: 0/64 done 00:09:17.939 00:09:17.939 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:09:17.939 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:18.197 00:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:18.197 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:18.197 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:18.197 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:18.197 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:18.197 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 869940 00:09:18.455 00:21:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:18.455 00:09:18.455 real 0m0.819s 00:09:18.455 user 0m0.020s 00:09:18.455 sys 0m0.054s 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:18.455 ************************************ 00:09:18.455 END TEST filesystem_in_capsule_ext4 00:09:18.455 ************************************ 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.455 ************************************ 00:09:18.455 START TEST filesystem_in_capsule_btrfs 00:09:18.455 ************************************ 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create 
btrfs nvme0n1 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:09:18.455 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:18.456 btrfs-progs v6.6.2 00:09:18.456 See https://btrfs.readthedocs.io for more information. 00:09:18.456 00:09:18.456 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:18.456 NOTE: several default settings have changed in version 5.15, please make sure 00:09:18.456 this does not affect your deployments: 00:09:18.456 - DUP for metadata (-m dup) 00:09:18.456 - enabled no-holes (-O no-holes) 00:09:18.456 - enabled free-space-tree (-R free-space-tree) 00:09:18.456 00:09:18.456 Label: (null) 00:09:18.456 UUID: f8078ab5-58f7-441f-a535-0a98d6ec82bc 00:09:18.456 Node size: 16384 00:09:18.456 Sector size: 4096 00:09:18.456 Filesystem size: 510.00MiB 00:09:18.456 Block group profiles: 00:09:18.456 Data: single 8.00MiB 00:09:18.456 Metadata: DUP 32.00MiB 00:09:18.456 System: DUP 8.00MiB 00:09:18.456 SSD detected: yes 00:09:18.456 Zoned device: no 00:09:18.456 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:18.456 Runtime features: free-space-tree 00:09:18.456 Checksum: crc32c 00:09:18.456 Number of devices: 1 00:09:18.456 Devices: 00:09:18.456 ID SIZE PATH 00:09:18.456 1 510.00MiB /dev/nvme0n1p1 00:09:18.456 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:09:18.456 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@29 -- # i=0 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 869940 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:19.022 00:09:19.022 real 0m0.615s 00:09:19.022 user 0m0.018s 00:09:19.022 sys 0m0.123s 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:19.022 ************************************ 00:09:19.022 END TEST filesystem_in_capsule_btrfs 00:09:19.022 ************************************ 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.022 ************************************ 00:09:19.022 START TEST filesystem_in_capsule_xfs 00:09:19.022 ************************************ 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:09:19.022 00:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:19.022 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 
00:09:19.022 = sectsz=512 attr=2, projid32bit=1 00:09:19.022 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:19.022 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:19.022 data = bsize=4096 blocks=130560, imaxpct=25 00:09:19.022 = sunit=0 swidth=0 blks 00:09:19.022 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:19.022 log =internal log bsize=4096 blocks=16384, version=2 00:09:19.022 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:19.022 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:20.434 Discarding blocks...Done. 00:09:20.434 00:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:09:20.434 00:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 869940 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o 
NAME 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:22.340 00:09:22.340 real 0m3.006s 00:09:22.340 user 0m0.019s 00:09:22.340 sys 0m0.054s 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:22.340 ************************************ 00:09:22.340 END TEST filesystem_in_capsule_xfs 00:09:22.340 ************************************ 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:22.340 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 869940 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 869940 ']' 00:09:22.341 00:21:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 869940 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 869940 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 869940' 00:09:22.341 killing process with pid 869940 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 869940 00:09:22.341 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 869940 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:22.601 00:09:22.601 real 0m9.906s 00:09:22.601 user 0m37.952s 00:09:22.601 sys 0m1.632s 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.601 ************************************ 00:09:22.601 END TEST nvmf_filesystem_in_capsule 00:09:22.601 ************************************ 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:22.601 rmmod nvme_tcp 00:09:22.601 rmmod nvme_fabrics 00:09:22.601 rmmod nvme_keyring 00:09:22.601 00:21:50 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.601 00:21:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.142 00:21:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.142 00:09:25.142 real 0m26.732s 00:09:25.142 user 1m27.464s 00:09:25.142 sys 0m4.969s 00:09:25.142 00:21:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:25.142 00:21:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:25.142 ************************************ 00:09:25.142 END TEST nvmf_filesystem 00:09:25.142 ************************************ 00:09:25.142 00:21:52 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:25.142 00:21:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:25.142 00:21:52 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:09:25.142 00:21:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.142 ************************************ 00:09:25.142 START TEST nvmf_target_discovery 00:09:25.142 ************************************ 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:25.142 * Looking for test storage... 00:09:25.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.142 00:21:52 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.521 00:21:54 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:26.521 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:26.521 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:26.521 Found net devices under 0000:08:00.0: cvl_0_0 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:26.521 Found net devices under 0000:08:00.1: cvl_0_1 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.521 00:21:54 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.521 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:09:26.522 00:09:26.522 --- 10.0.0.2 ping statistics --- 00:09:26.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.522 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:26.522 00:09:26.522 --- 10.0.0.1 ping statistics --- 00:09:26.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.522 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.522 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=872542 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # 
waitforlisten 872542 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 872542 ']' 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:26.781 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:26.781 [2024-07-12 00:21:54.420767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:26.781 [2024-07-12 00:21:54.420856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.781 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.781 [2024-07-12 00:21:54.485085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.781 [2024-07-12 00:21:54.572454] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.782 [2024-07-12 00:21:54.572510] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.782 [2024-07-12 00:21:54.572526] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.782 [2024-07-12 00:21:54.572540] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:09:26.782 [2024-07-12 00:21:54.572552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.782 [2024-07-12 00:21:54.572890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.782 [2024-07-12 00:21:54.572965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.782 [2024-07-12 00:21:54.573046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.782 [2024-07-12 00:21:54.573051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 [2024-07-12 00:21:54.711162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:27.041 00:21:54 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 Null1 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.041 [2024-07-12 00:21:54.751435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.041 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 Null2 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 Null3 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i 
in $(seq 1 4) 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 Null4 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener 
discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.042 00:21:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:09:27.300 00:09:27.300 Discovery Log Number of Records 6, Generation counter 6 00:09:27.300 =====Discovery Log Entry 0====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: ipv4 00:09:27.300 subtype: current discovery subsystem 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4420 00:09:27.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: explicit discovery connections, duplicate discovery information 00:09:27.300 sectype: none 00:09:27.300 =====Discovery Log Entry 1====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: ipv4 00:09:27.300 subtype: nvme subsystem 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4420 00:09:27.300 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: none 00:09:27.300 sectype: none 00:09:27.300 =====Discovery Log Entry 2====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: 
ipv4 00:09:27.300 subtype: nvme subsystem 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4420 00:09:27.300 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: none 00:09:27.300 sectype: none 00:09:27.300 =====Discovery Log Entry 3====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: ipv4 00:09:27.300 subtype: nvme subsystem 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4420 00:09:27.300 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: none 00:09:27.300 sectype: none 00:09:27.300 =====Discovery Log Entry 4====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: ipv4 00:09:27.300 subtype: nvme subsystem 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4420 00:09:27.300 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: none 00:09:27.300 sectype: none 00:09:27.300 =====Discovery Log Entry 5====== 00:09:27.300 trtype: tcp 00:09:27.300 adrfam: ipv4 00:09:27.300 subtype: discovery subsystem referral 00:09:27.300 treq: not required 00:09:27.300 portid: 0 00:09:27.300 trsvcid: 4430 00:09:27.300 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:27.300 traddr: 10.0.0.2 00:09:27.300 eflags: none 00:09:27.300 sectype: none 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:27.301 Perform nvmf subsystem discovery via RPC 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 [ 00:09:27.301 { 00:09:27.301 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:27.301 "subtype": "Discovery", 00:09:27.301 "listen_addresses": [ 00:09:27.301 { 
00:09:27.301 "trtype": "TCP", 00:09:27.301 "adrfam": "IPv4", 00:09:27.301 "traddr": "10.0.0.2", 00:09:27.301 "trsvcid": "4420" 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "allow_any_host": true, 00:09:27.301 "hosts": [] 00:09:27.301 }, 00:09:27.301 { 00:09:27.301 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.301 "subtype": "NVMe", 00:09:27.301 "listen_addresses": [ 00:09:27.301 { 00:09:27.301 "trtype": "TCP", 00:09:27.301 "adrfam": "IPv4", 00:09:27.301 "traddr": "10.0.0.2", 00:09:27.301 "trsvcid": "4420" 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "allow_any_host": true, 00:09:27.301 "hosts": [], 00:09:27.301 "serial_number": "SPDK00000000000001", 00:09:27.301 "model_number": "SPDK bdev Controller", 00:09:27.301 "max_namespaces": 32, 00:09:27.301 "min_cntlid": 1, 00:09:27.301 "max_cntlid": 65519, 00:09:27.301 "namespaces": [ 00:09:27.301 { 00:09:27.301 "nsid": 1, 00:09:27.301 "bdev_name": "Null1", 00:09:27.301 "name": "Null1", 00:09:27.301 "nguid": "A1B5A694CC9F4708B05763594B67F634", 00:09:27.301 "uuid": "a1b5a694-cc9f-4708-b057-63594b67f634" 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 }, 00:09:27.301 { 00:09:27.301 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:27.301 "subtype": "NVMe", 00:09:27.301 "listen_addresses": [ 00:09:27.301 { 00:09:27.301 "trtype": "TCP", 00:09:27.301 "adrfam": "IPv4", 00:09:27.301 "traddr": "10.0.0.2", 00:09:27.301 "trsvcid": "4420" 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "allow_any_host": true, 00:09:27.301 "hosts": [], 00:09:27.301 "serial_number": "SPDK00000000000002", 00:09:27.301 "model_number": "SPDK bdev Controller", 00:09:27.301 "max_namespaces": 32, 00:09:27.301 "min_cntlid": 1, 00:09:27.301 "max_cntlid": 65519, 00:09:27.301 "namespaces": [ 00:09:27.301 { 00:09:27.301 "nsid": 1, 00:09:27.301 "bdev_name": "Null2", 00:09:27.301 "name": "Null2", 00:09:27.301 "nguid": "FC55D6CAF9894A1F9B6B635C77FFA796", 00:09:27.301 "uuid": "fc55d6ca-f989-4a1f-9b6b-635c77ffa796" 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 }, 00:09:27.301 { 
00:09:27.301 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:27.301 "subtype": "NVMe", 00:09:27.301 "listen_addresses": [ 00:09:27.301 { 00:09:27.301 "trtype": "TCP", 00:09:27.301 "adrfam": "IPv4", 00:09:27.301 "traddr": "10.0.0.2", 00:09:27.301 "trsvcid": "4420" 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "allow_any_host": true, 00:09:27.301 "hosts": [], 00:09:27.301 "serial_number": "SPDK00000000000003", 00:09:27.301 "model_number": "SPDK bdev Controller", 00:09:27.301 "max_namespaces": 32, 00:09:27.301 "min_cntlid": 1, 00:09:27.301 "max_cntlid": 65519, 00:09:27.301 "namespaces": [ 00:09:27.301 { 00:09:27.301 "nsid": 1, 00:09:27.301 "bdev_name": "Null3", 00:09:27.301 "name": "Null3", 00:09:27.301 "nguid": "31E4B8B3F3D7422583008B9F743C5972", 00:09:27.301 "uuid": "31e4b8b3-f3d7-4225-8300-8b9f743c5972" 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 }, 00:09:27.301 { 00:09:27.301 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:27.301 "subtype": "NVMe", 00:09:27.301 "listen_addresses": [ 00:09:27.301 { 00:09:27.301 "trtype": "TCP", 00:09:27.301 "adrfam": "IPv4", 00:09:27.301 "traddr": "10.0.0.2", 00:09:27.301 "trsvcid": "4420" 00:09:27.301 } 00:09:27.301 ], 00:09:27.301 "allow_any_host": true, 00:09:27.301 "hosts": [], 00:09:27.301 "serial_number": "SPDK00000000000004", 00:09:27.301 "model_number": "SPDK bdev Controller", 00:09:27.301 "max_namespaces": 32, 00:09:27.301 "min_cntlid": 1, 00:09:27.301 "max_cntlid": 65519, 00:09:27.301 "namespaces": [ 00:09:27.301 { 00:09:27.301 "nsid": 1, 00:09:27.301 "bdev_name": "Null4", 00:09:27.301 "name": "Null4", 00:09:27.301 "nguid": "6F893C34FFF040CF91645837CF4630A8", 00:09:27.301 "uuid": "6f893c34-fff0-40cf-9164-5837cf4630a8" 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 } 00:09:27.301 ] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 
00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.301 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.560 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 
00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.561 rmmod nvme_tcp 00:09:27.561 rmmod nvme_fabrics 00:09:27.561 rmmod nvme_keyring 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 872542 ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 872542 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 872542 ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 872542 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 872542 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 872542' 00:09:27.561 killing process with pid 872542 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 872542 00:09:27.561 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 872542 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.821 00:21:55 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.821 00:21:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.726 00:21:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:29.726 00:09:29.726 real 0m5.022s 00:09:29.726 user 0m4.284s 00:09:29.726 sys 0m1.606s 00:09:29.726 00:21:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:29.726 00:21:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:29.726 ************************************ 00:09:29.726 END TEST nvmf_target_discovery 00:09:29.726 ************************************ 00:09:29.726 00:21:57 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:29.726 00:21:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:29.726 00:21:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:29.726 00:21:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.983 ************************************ 00:09:29.983 START TEST nvmf_referrals 00:09:29.983 ************************************ 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:29.983 * Looking for test storage... 
00:09:29.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.983 00:21:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.984 
00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.984 00:21:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.363 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.364 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:31.623 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:31.623 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:31.623 Found net devices under 0000:08:00.0: cvl_0_0 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.623 00:21:59 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:31.623 Found net devices under 0000:08:00.1: cvl_0_1 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.623 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:09:31.624 00:09:31.624 --- 10.0.0.2 ping statistics --- 00:09:31.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.624 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:09:31.624 00:09:31.624 --- 10.0.0.1 ping statistics --- 00:09:31.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.624 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 00:21:59 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=874078 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 874078 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 874078 ']' 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:31.624 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.624 [2024-07-12 00:21:59.393773] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:09:31.624 [2024-07-12 00:21:59.393861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.624 EAL: No free 2048 kB hugepages reported on node 1 00:09:31.624 [2024-07-12 00:21:59.461012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.883 [2024-07-12 00:21:59.551049] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.883 [2024-07-12 00:21:59.551107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:31.883 [2024-07-12 00:21:59.551123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.883 [2024-07-12 00:21:59.551136] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.883 [2024-07-12 00:21:59.551147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.883 [2024-07-12 00:21:59.551226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.883 [2024-07-12 00:21:59.551279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.883 [2024-07-12 00:21:59.554632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.883 [2024-07-12 00:21:59.554667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.883 [2024-07-12 00:21:59.704215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.883 00:21:59 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.883 [2024-07-12 00:21:59.716432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.883 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.140 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.141 00:21:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.398 00:22:00 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.398 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.656 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.914 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.171 00:22:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:33.429 00:22:01 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:33.429 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.687 rmmod nvme_tcp 00:09:33.687 rmmod nvme_fabrics 00:09:33.687 rmmod nvme_keyring 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 874078 ']' 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 874078 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 874078 ']' 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 874078 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 874078 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 874078' 00:09:33.687 killing process with pid 874078 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 874078 00:09:33.687 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 874078 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.947 00:22:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.850 00:22:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.850 00:09:35.850 real 0m6.086s 00:09:35.850 user 0m9.549s 00:09:35.850 sys 0m1.808s 00:09:35.850 00:22:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.850 00:22:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.850 ************************************ 
00:09:35.850 END TEST nvmf_referrals 00:09:35.850 ************************************ 00:09:35.850 00:22:03 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:35.850 00:22:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:35.850 00:22:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.850 00:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.109 ************************************ 00:09:36.109 START TEST nvmf_connect_disconnect 00:09:36.109 ************************************ 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:36.109 * Looking for test storage... 00:09:36.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.109 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.110 00:22:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 
00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:38.014 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:38.014 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.014 00:22:05 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:38.014 Found net devices under 0000:08:00.0: cvl_0_0 00:09:38.014 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:38.015 Found net devices under 0000:08:00.1: cvl_0_1 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:09:38.015 00:09:38.015 --- 10.0.0.2 ping statistics --- 00:09:38.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.015 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:38.015 00:09:38.015 --- 10.0.0.1 ping statistics --- 00:09:38.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.015 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=875872 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 875872 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 875872 ']' 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:38.015 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.015 [2024-07-12 00:22:05.663237] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
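The interface plumbing earlier in the trace (ip netns add, moving cvl_0_0 into the namespace, addressing both ends, opening TCP/4420, and the two ping checks) boils down to the sequence below. This is a dry-run sketch: interface names, addresses, and the port come from the log, but run() only echoes each command so the sequence can be read without root; replace its body with "$@" and execute as root to actually apply it.

```shell
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
NETNS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0    # target-side port, moved into the namespace
INI_IF=cvl_0_1    # initiator-side port, left in the root namespace

run() { echo "+ $*"; }   # swap for: "$@"  (as root) to apply for real

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NETNS"
run ip link set "$TGT_IF" netns "$NETNS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NETNS" ip link set "$TGT_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # root ns -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1   # target ns -> initiator
```

The two physical ports of the same NIC act as a point-to-point link, with one end isolated in a network namespace so target and initiator traffic really traverses the wire.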
00:09:38.015 [2024-07-12 00:22:05.663317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.015 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.015 [2024-07-12 00:22:05.727155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.015 [2024-07-12 00:22:05.814503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.015 [2024-07-12 00:22:05.814561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.015 [2024-07-12 00:22:05.814577] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.015 [2024-07-12 00:22:05.814597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.015 [2024-07-12 00:22:05.814609] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
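The nvmf_tgt process launched above is then configured over /var/tmp/spdk.sock via rpc_cmd. The sketch below recreates those calls as scripts/rpc.py invocations with the arguments copied from this trace; the SPDK checkout path is taken from the log, and rpc() only echoes rather than contacting a live target.

```shell
# Dry-run recreation of the rpc_cmd configuration steps in this test.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

rpc() { echo "+ $SPDK/scripts/rpc.py $*"; }   # drop the echo to talk to a live target

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, 8192 B in-capsule data
rpc bdev_malloc_create 64 512                      # 64 MiB RAM disk, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener call the target logs "NVMe/TCP Target Listening on 10.0.0.2 port 4420", which is the state the connect/disconnect loop relies on.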
00:09:38.015 [2024-07-12 00:22:05.814666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.015 [2024-07-12 00:22:05.814752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.015 [2024-07-12 00:22:05.814805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.015 [2024-07-12 00:22:05.814802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 [2024-07-12 00:22:05.956243] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.273 00:22:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.273 [2024-07-12 00:22:06.006366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:38.273 00:22:06 
nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:38.273 00:22:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:40.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:34.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.072 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.761 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 
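The long run of "disconnected 1 controller(s)" lines above is the output of the test's main loop: 100 iterations (num_iterations=100 with NVME_CONNECT='nvme connect -i 8', both visible in the trace). A dry-run sketch of that loop's shape follows; the remaining connect flags are reconstructed from the addresses and NQN used elsewhere in this log, not copied from connect_disconnect.sh.

```shell
# Shape of the 100-iteration connect/disconnect loop (dry-run via run()).
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "+ $*"; }   # swap for: "$@"  (as root) to drive a live target

for i in $(seq 1 100); do
    run nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN"
    run nvme disconnect -n "$NQN"   # emits "NQN:<nqn> disconnected 1 controller(s)"
done
```

Each iteration exercises the full NVMe/TCP controller setup and teardown path, which is why every disconnect reports exactly one controller.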
00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:29.503 rmmod nvme_tcp 00:13:29.503 rmmod nvme_fabrics 00:13:29.503 rmmod nvme_keyring 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 875872 ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 875872 ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 875872' 
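The cleanup above (nvmftestfini/nvmfcleanup) unloads the fabrics modules, kills the target, and undoes the namespace plumbing. A dry-run sketch of that teardown, using the pid and interface names from this run:

```shell
# Dry-run sketch of the nvmftestfini teardown path.
NVMFPID=875872   # nvmfpid captured when the target started
run() { echo "+ $*"; }   # swap for: "$@"  (as root) to apply for real

run modprobe -v -r nvme-tcp        # rmmod also drops nvme_fabrics/nvme_keyring deps
run modprobe -v -r nvme-fabrics
run kill "$NVMFPID"
run ip -4 addr flush cvl_0_1
run ip netns delete cvl_0_0_ns_spdk
```

Flushing the initiator address and deleting the namespace returns both NIC ports to a clean state for the next test in the suite.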
00:13:29.503 killing process with pid 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 875872 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.503 00:25:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.048 00:25:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:32.048 00:13:32.048 real 3m55.623s 00:13:32.048 user 14m59.139s 00:13:32.048 sys 0m32.741s 00:13:32.048 00:25:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.048 00:25:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.048 ************************************ 00:13:32.048 END TEST nvmf_connect_disconnect 00:13:32.048 ************************************ 00:13:32.048 00:25:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:32.048 00:25:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:32.048 00:25:59 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.048 00:25:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:32.048 ************************************ 00:13:32.048 START TEST nvmf_multitarget 00:13:32.048 ************************************ 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:32.048 * Looking for test storage... 00:13:32.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:32.048 00:25:59 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.048 00:25:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:32.049 00:25:59 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:32.049 00:25:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.430 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:33.431 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:33.431 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.431 00:26:01 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:33.431 Found net devices under 0000:08:00.0: cvl_0_0 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:33.431 Found net devices under 0000:08:00.1: cvl_0_1 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.431 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:13:33.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:13:33.431 00:13:33.431 --- 10.0.0.2 ping statistics --- 00:13:33.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.431 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:33.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:13:33.431 00:13:33.431 --- 10.0.0.1 ping statistics --- 00:13:33.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.431 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=900596 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 900596 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 900596 ']' 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.431 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:33.690 [2024-07-12 00:26:01.302435] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:33.690 [2024-07-12 00:26:01.302521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.690 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.690 [2024-07-12 00:26:01.367195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.690 [2024-07-12 00:26:01.454741] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:33.690 [2024-07-12 00:26:01.454797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.690 [2024-07-12 00:26:01.454814] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.690 [2024-07-12 00:26:01.454827] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.690 [2024-07-12 00:26:01.454839] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.690 [2024-07-12 00:26:01.454915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.690 [2024-07-12 00:26:01.454968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.690 [2024-07-12 00:26:01.455019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.690 [2024-07-12 00:26:01.455022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:33.948 00:26:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.949 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:33.949 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:33.949 00:26:01 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:33.949 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:33.949 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:34.207 "nvmf_tgt_1" 00:13:34.207 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:34.207 "nvmf_tgt_2" 00:13:34.207 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:34.207 00:26:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:34.466 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:34.466 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:34.466 true 00:13:34.466 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:34.725 true 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- 
# nvmftestfini 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:34.725 rmmod nvme_tcp 00:13:34.725 rmmod nvme_fabrics 00:13:34.725 rmmod nvme_keyring 00:13:34.725 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 900596 ']' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 900596 ']' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 900596' 00:13:34.989 killing process with 
pid 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 900596 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.989 00:26:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.581 00:26:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.581 00:13:37.581 real 0m5.417s 00:13:37.581 user 0m6.706s 00:13:37.581 sys 0m1.658s 00:13:37.581 00:26:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:37.581 00:26:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:37.581 ************************************ 00:13:37.581 END TEST nvmf_multitarget 00:13:37.581 ************************************ 00:13:37.581 00:26:04 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:37.581 00:26:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:37.581 00:26:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.581 00:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.581 
************************************ 00:13:37.581 START TEST nvmf_rpc 00:13:37.581 ************************************ 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:37.581 * Looking for test storage... 00:13:37.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.581 00:26:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.582 00:26:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.964 00:26:06 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:38.964 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:38.964 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:38.964 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:38.965 Found net devices under 0000:08:00.0: cvl_0_0 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:38.965 Found net devices under 0000:08:00.1: cvl_0_1 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:38.965 00:26:06 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:38.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:38.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:38.965 00:13:38.965 --- 10.0.0.2 ping statistics --- 00:13:38.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.965 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:38.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:38.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:13:38.965 00:13:38.965 --- 10.0.0.1 ping statistics --- 00:13:38.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:38.965 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:38.965 
00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=902140 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 902140 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 902140 ']' 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.965 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.965 [2024-07-12 00:26:06.634022] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:13:38.965 [2024-07-12 00:26:06.634116] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.965 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.965 [2024-07-12 00:26:06.700000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.965 [2024-07-12 00:26:06.791034] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.965 [2024-07-12 00:26:06.791092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.965 [2024-07-12 00:26:06.791109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.965 [2024-07-12 00:26:06.791122] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.965 [2024-07-12 00:26:06.791134] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.965 [2024-07-12 00:26:06.791205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.965 [2024-07-12 00:26:06.791310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.965 [2024-07-12 00:26:06.791312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.965 [2024-07-12 00:26:06.791261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:39.225 "tick_rate": 2700000000, 00:13:39.225 "poll_groups": [ 00:13:39.225 { 00:13:39.225 "name": "nvmf_tgt_poll_group_000", 00:13:39.225 "admin_qpairs": 0, 00:13:39.225 "io_qpairs": 0, 00:13:39.225 "current_admin_qpairs": 0, 00:13:39.225 "current_io_qpairs": 0, 00:13:39.225 "pending_bdev_io": 0, 00:13:39.225 "completed_nvme_io": 0, 00:13:39.225 "transports": [] 00:13:39.225 }, 00:13:39.225 { 00:13:39.225 "name": "nvmf_tgt_poll_group_001", 00:13:39.225 "admin_qpairs": 0, 00:13:39.225 "io_qpairs": 0, 00:13:39.225 "current_admin_qpairs": 
0, 00:13:39.225 "current_io_qpairs": 0, 00:13:39.225 "pending_bdev_io": 0, 00:13:39.225 "completed_nvme_io": 0, 00:13:39.225 "transports": [] 00:13:39.225 }, 00:13:39.225 { 00:13:39.225 "name": "nvmf_tgt_poll_group_002", 00:13:39.225 "admin_qpairs": 0, 00:13:39.225 "io_qpairs": 0, 00:13:39.225 "current_admin_qpairs": 0, 00:13:39.225 "current_io_qpairs": 0, 00:13:39.225 "pending_bdev_io": 0, 00:13:39.225 "completed_nvme_io": 0, 00:13:39.225 "transports": [] 00:13:39.225 }, 00:13:39.225 { 00:13:39.225 "name": "nvmf_tgt_poll_group_003", 00:13:39.225 "admin_qpairs": 0, 00:13:39.225 "io_qpairs": 0, 00:13:39.225 "current_admin_qpairs": 0, 00:13:39.225 "current_io_qpairs": 0, 00:13:39.225 "pending_bdev_io": 0, 00:13:39.225 "completed_nvme_io": 0, 00:13:39.225 "transports": [] 00:13:39.225 } 00:13:39.225 ] 00:13:39.225 }' 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:39.225 00:26:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.225 [2024-07-12 00:26:07.050604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # 
rpc_cmd nvmf_get_stats 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.225 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.484 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.484 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:39.484 "tick_rate": 2700000000, 00:13:39.484 "poll_groups": [ 00:13:39.484 { 00:13:39.484 "name": "nvmf_tgt_poll_group_000", 00:13:39.484 "admin_qpairs": 0, 00:13:39.484 "io_qpairs": 0, 00:13:39.484 "current_admin_qpairs": 0, 00:13:39.484 "current_io_qpairs": 0, 00:13:39.484 "pending_bdev_io": 0, 00:13:39.484 "completed_nvme_io": 0, 00:13:39.484 "transports": [ 00:13:39.484 { 00:13:39.484 "trtype": "TCP" 00:13:39.484 } 00:13:39.484 ] 00:13:39.484 }, 00:13:39.484 { 00:13:39.484 "name": "nvmf_tgt_poll_group_001", 00:13:39.484 "admin_qpairs": 0, 00:13:39.484 "io_qpairs": 0, 00:13:39.484 "current_admin_qpairs": 0, 00:13:39.484 "current_io_qpairs": 0, 00:13:39.484 "pending_bdev_io": 0, 00:13:39.484 "completed_nvme_io": 0, 00:13:39.484 "transports": [ 00:13:39.484 { 00:13:39.484 "trtype": "TCP" 00:13:39.484 } 00:13:39.484 ] 00:13:39.484 }, 00:13:39.484 { 00:13:39.484 "name": "nvmf_tgt_poll_group_002", 00:13:39.484 "admin_qpairs": 0, 00:13:39.484 "io_qpairs": 0, 00:13:39.484 "current_admin_qpairs": 0, 00:13:39.484 "current_io_qpairs": 0, 00:13:39.484 "pending_bdev_io": 0, 00:13:39.484 "completed_nvme_io": 0, 00:13:39.484 "transports": [ 00:13:39.484 { 00:13:39.484 "trtype": "TCP" 00:13:39.484 } 00:13:39.485 ] 00:13:39.485 }, 00:13:39.485 { 00:13:39.485 "name": "nvmf_tgt_poll_group_003", 00:13:39.485 "admin_qpairs": 0, 00:13:39.485 "io_qpairs": 0, 00:13:39.485 "current_admin_qpairs": 0, 00:13:39.485 "current_io_qpairs": 0, 00:13:39.485 "pending_bdev_io": 0, 00:13:39.485 "completed_nvme_io": 0, 00:13:39.485 "transports": [ 00:13:39.485 { 00:13:39.485 "trtype": "TCP" 00:13:39.485 } 00:13:39.485 ] 00:13:39.485 } 
00:13:39.485 ] 00:13:39.485 }' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 Malloc1 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 [2024-07-12 00:26:07.203668] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:13:39.485 [2024-07-12 00:26:07.226188] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:13:39.485 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:39.485 could not add new controller: failed to write to nvme-fabrics device 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.485 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.058 00:26:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.058 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:40.058 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.058 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:40.058 00:26:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:41.971 00:26:09 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.230 00:26:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.230 [2024-07-12 00:26:09.884086] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:13:42.230 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:42.230 could not add new controller: failed to write to nvme-fabrics device 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.230 00:26:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.799 00:26:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.799 00:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:42.799 00:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.799 00:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:42.799 00:26:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 [2024-07-12 00:26:12.536210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.704 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.964 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.226 00:26:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.226 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:45.226 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.226 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:45.226 00:26:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:47.762 00:26:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 [2024-07-12 00:26:15.131701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:47.762 00:26:15 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:47.762 00:26:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:50.302 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:50.302 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:50.303 00:26:17 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 [2024-07-12 00:26:17.721053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:50.303 00:26:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.303 00:26:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.563 00:26:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:50.563 00:26:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:50.563 00:26:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.563 00:26:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:50.563 00:26:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 
0 00:13:52.472 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.732 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.733 [2024-07-12 00:26:20.367958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.733 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:52.992 00:26:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.992 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 
00:13:52.992 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.992 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:52.992 00:26:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.575 [2024-07-12 00:26:22.911577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.575 00:26:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:55.575 00:26:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:55.575 00:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:13:55.575 00:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.575 00:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:55.575 00:26:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.109 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 [2024-07-12 00:26:25.519213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 [2024-07-12 00:26:25.567325] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.109 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 [2024-07-12 00:26:25.615469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 [2024-07-12 00:26:25.663671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 [2024-07-12 00:26:25.711807] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:58.110 "tick_rate": 2700000000, 00:13:58.110 "poll_groups": [ 00:13:58.110 { 00:13:58.110 "name": "nvmf_tgt_poll_group_000", 00:13:58.110 "admin_qpairs": 2, 00:13:58.110 "io_qpairs": 56, 00:13:58.110 "current_admin_qpairs": 0, 00:13:58.110 "current_io_qpairs": 0, 00:13:58.110 "pending_bdev_io": 0, 00:13:58.110 "completed_nvme_io": 123, 00:13:58.110 "transports": [ 00:13:58.110 { 00:13:58.110 "trtype": "TCP" 00:13:58.110 } 00:13:58.110 ] 00:13:58.110 }, 00:13:58.110 { 00:13:58.110 "name": "nvmf_tgt_poll_group_001", 00:13:58.110 "admin_qpairs": 2, 00:13:58.110 "io_qpairs": 56, 
00:13:58.110 "current_admin_qpairs": 0, 00:13:58.110 "current_io_qpairs": 0, 00:13:58.110 "pending_bdev_io": 0, 00:13:58.110 "completed_nvme_io": 106, 00:13:58.110 "transports": [ 00:13:58.110 { 00:13:58.110 "trtype": "TCP" 00:13:58.110 } 00:13:58.110 ] 00:13:58.110 }, 00:13:58.110 { 00:13:58.110 "name": "nvmf_tgt_poll_group_002", 00:13:58.110 "admin_qpairs": 1, 00:13:58.110 "io_qpairs": 56, 00:13:58.110 "current_admin_qpairs": 0, 00:13:58.110 "current_io_qpairs": 0, 00:13:58.110 "pending_bdev_io": 0, 00:13:58.110 "completed_nvme_io": 163, 00:13:58.110 "transports": [ 00:13:58.110 { 00:13:58.110 "trtype": "TCP" 00:13:58.110 } 00:13:58.110 ] 00:13:58.110 }, 00:13:58.110 { 00:13:58.110 "name": "nvmf_tgt_poll_group_003", 00:13:58.110 "admin_qpairs": 2, 00:13:58.110 "io_qpairs": 56, 00:13:58.110 "current_admin_qpairs": 0, 00:13:58.110 "current_io_qpairs": 0, 00:13:58.110 "pending_bdev_io": 0, 00:13:58.110 "completed_nvme_io": 182, 00:13:58.110 "transports": [ 00:13:58.110 { 00:13:58.110 "trtype": "TCP" 00:13:58.110 } 00:13:58.110 ] 00:13:58.110 } 00:13:58.110 ] 00:13:58.110 }' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@113 -- # (( 224 > 0 )) 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.110 rmmod nvme_tcp 00:13:58.110 rmmod nvme_fabrics 00:13:58.110 rmmod nvme_keyring 00:13:58.110 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 902140 ']' 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 902140 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 902140 ']' 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 902140 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 902140 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:58.111 00:26:25 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 902140' 00:13:58.111 killing process with pid 902140 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 902140 00:13:58.111 00:26:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 902140 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.370 00:26:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.909 00:26:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.909 00:14:00.909 real 0m23.335s 00:14:00.909 user 1m16.433s 00:14:00.909 sys 0m3.673s 00:14:00.909 00:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:00.909 00:26:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.909 ************************************ 00:14:00.909 END TEST nvmf_rpc 00:14:00.909 ************************************ 00:14:00.909 00:26:28 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:00.909 00:26:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:00.909 00:26:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:00.909 00:26:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:14:00.909 ************************************ 00:14:00.909 START TEST nvmf_invalid 00:14:00.909 ************************************ 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:00.909 * Looking for test storage... 00:14:00.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.909 00:26:28 
nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:00.909 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.910 00:26:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:02.285 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 
00:14:02.285 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.285 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:02.286 Found net devices under 0000:08:00.0: cvl_0_0 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.286 
00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:02.286 Found net devices under 0000:08:00.1: cvl_0_1 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.286 00:26:29 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.286 00:26:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:14:02.286 00:14:02.286 --- 10.0.0.2 ping statistics --- 00:14:02.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.286 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:14:02.286 00:14:02.286 --- 10.0.0.1 ping statistics --- 00:14:02.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.286 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=905595 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 905595 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 905595 ']' 00:14:02.286 00:26:30 
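The trace above (nvmf/common.sh, `nvmf_tcp_init`) moves the target NIC into a private network namespace, assigns 10.0.0.1/10.0.0.2, opens TCP port 4420, and verifies connectivity with pings before starting `nvmf_tgt`. As a hedged annotation, the sketch below only composes the same ip(8)/iptables command sequence as Python lists (nothing is executed — running them requires root); the device names `cvl_0_0`/`cvl_0_1`, the namespace name, and the addresses are copied from the log:

```python
# Sketch of the netns wiring performed by nvmf_tcp_init in the trace above.
# Commands are composed only, not run; names/addresses come from the log.

def netns_setup_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                     ns="cvl_0_0_ns_spdk",
                     initiator_ip="10.0.0.1", target_ip="10.0.0.2"):
    """Return the ip(8)/iptables commands mirroring the logged sequence."""
    in_ns = ["ip", "netns", "exec", ns]  # prefix for commands run inside the namespace
    return [
        ["ip", "-4", "addr", "flush", target_if],
        ["ip", "-4", "addr", "flush", initiator_if],
        ["ip", "netns", "add", ns],
        ["ip", "link", "set", target_if, "netns", ns],            # move target NIC into ns
        ["ip", "addr", "add", f"{initiator_ip}/24", "dev", initiator_if],
        in_ns + ["ip", "addr", "add", f"{target_ip}/24", "dev", target_if],
        ["ip", "link", "set", initiator_if, "up"],
        in_ns + ["ip", "link", "set", target_if, "up"],
        in_ns + ["ip", "link", "set", "lo", "up"],
        ["iptables", "-I", "INPUT", "1", "-i", initiator_if,
         "-p", "tcp", "--dport", "4420", "-j", "ACCEPT"],         # NVMe/TCP listener port
    ]
```

The pings in the log (host → 10.0.0.2, namespace → 10.0.0.1) then confirm both directions of this wiring before the target application is launched.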
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.286 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.545 [2024-07-12 00:26:30.136360] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:02.545 [2024-07-12 00:26:30.136446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.545 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.545 [2024-07-12 00:26:30.200593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.545 [2024-07-12 00:26:30.288098] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.545 [2024-07-12 00:26:30.288149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.545 [2024-07-12 00:26:30.288165] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.545 [2024-07-12 00:26:30.288186] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.545 [2024-07-12 00:26:30.288198] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:02.545 [2024-07-12 00:26:30.288270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.545 [2024-07-12 00:26:30.288321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.545 [2024-07-12 00:26:30.288368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.545 [2024-07-12 00:26:30.288371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:02.803 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32653 00:14:03.061 [2024-07-12 00:26:30.706035] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:03.061 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:03.061 { 00:14:03.061 "nqn": "nqn.2016-06.io.spdk:cnode32653", 00:14:03.061 "tgt_name": "foobar", 00:14:03.061 "method": "nvmf_create_subsystem", 00:14:03.061 "req_id": 1 00:14:03.061 } 00:14:03.061 Got JSON-RPC error response 00:14:03.061 response: 00:14:03.061 { 00:14:03.061 "code": -32603, 00:14:03.061 "message": "Unable to find target foobar" 00:14:03.061 }' 00:14:03.061 
00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:03.061 { 00:14:03.061 "nqn": "nqn.2016-06.io.spdk:cnode32653", 00:14:03.061 "tgt_name": "foobar", 00:14:03.061 "method": "nvmf_create_subsystem", 00:14:03.061 "req_id": 1 00:14:03.061 } 00:14:03.061 Got JSON-RPC error response 00:14:03.061 response: 00:14:03.061 { 00:14:03.061 "code": -32603, 00:14:03.061 "message": "Unable to find target foobar" 00:14:03.061 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:03.061 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:03.061 00:26:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1402 00:14:03.320 [2024-07-12 00:26:31.007069] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1402: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:03.320 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:03.320 { 00:14:03.320 "nqn": "nqn.2016-06.io.spdk:cnode1402", 00:14:03.320 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.320 "method": "nvmf_create_subsystem", 00:14:03.320 "req_id": 1 00:14:03.320 } 00:14:03.320 Got JSON-RPC error response 00:14:03.320 response: 00:14:03.320 { 00:14:03.320 "code": -32602, 00:14:03.320 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.320 }' 00:14:03.320 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:03.320 { 00:14:03.320 "nqn": "nqn.2016-06.io.spdk:cnode1402", 00:14:03.320 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.320 "method": "nvmf_create_subsystem", 00:14:03.320 "req_id": 1 00:14:03.320 } 00:14:03.320 Got JSON-RPC error response 00:14:03.320 response: 00:14:03.320 { 00:14:03.320 "code": -32602, 00:14:03.320 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.320 } == *\I\n\v\a\l\i\d\ \S\N* ]] 
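The test step above (target/invalid.sh@45-46) submits a serial number with a trailing 0x1f control byte and then glob-matches the JSON-RPC error output against `*Invalid SN*`. A minimal sketch of that check in Python, using the error object copied from the log (the helper name and match logic are illustrative, not part of the test scripts):

```python
import json

# JSON-RPC error object copied from the trace above (\u001f is the
# control byte appended to the serial number).
response_text = '{"code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\\u001f"}'

def is_invalid_sn_error(text: str) -> bool:
    """Mirror the shell glob test: invalid-params code and 'Invalid SN' in the message."""
    err = json.loads(text)
    return err["code"] == -32602 and "Invalid SN" in err["message"]
```

The earlier step at invalid.sh@40-41 applies the same pattern with `*Unable to find target*` and code -32603 for the bogus transport name.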
00:14:03.320 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:03.320 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10959 00:14:03.579 [2024-07-12 00:26:31.304098] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10959: invalid model number 'SPDK_Controller' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:03.579 { 00:14:03.579 "nqn": "nqn.2016-06.io.spdk:cnode10959", 00:14:03.579 "model_number": "SPDK_Controller\u001f", 00:14:03.579 "method": "nvmf_create_subsystem", 00:14:03.579 "req_id": 1 00:14:03.579 } 00:14:03.579 Got JSON-RPC error response 00:14:03.579 response: 00:14:03.579 { 00:14:03.579 "code": -32602, 00:14:03.579 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.579 }' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:03.579 { 00:14:03.579 "nqn": "nqn.2016-06.io.spdk:cnode10959", 00:14:03.579 "model_number": "SPDK_Controller\u001f", 00:14:03.579 "method": "nvmf_create_subsystem", 00:14:03.579 "req_id": 1 00:14:03.579 } 00:14:03.579 Got JSON-RPC error response 00:14:03.579 response: 00:14:03.579 { 00:14:03.579 "code": -32602, 00:14:03.579 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.579 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' 
'86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:03.579 00:26:31 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 
00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:03.579 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '.=g{Xhf?oE^g[s6BZ{O4' 00:14:03.580 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '.=g{Xhf?oE^g[s6BZ{O4' nqn.2016-06.io.spdk:cnode16662 00:14:04.148 [2024-07-12 00:26:31.681256] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16662: invalid serial number '.=g{Xhf?oE^g[s6BZ{O4' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:04.148 { 00:14:04.148 "nqn": "nqn.2016-06.io.spdk:cnode16662", 00:14:04.148 "serial_number": ".=g{Xhf?oE^g[s6\u007fBZ{O4", 00:14:04.148 "method": "nvmf_create_subsystem", 00:14:04.148 "req_id": 1 00:14:04.148 } 00:14:04.148 Got JSON-RPC error response 00:14:04.148 response: 00:14:04.148 { 00:14:04.148 "code": -32602, 00:14:04.148 "message": "Invalid SN .=g{Xhf?oE^g[s6\u007fBZ{O4" 00:14:04.148 }' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:04.148 { 00:14:04.148 "nqn": "nqn.2016-06.io.spdk:cnode16662", 00:14:04.148 "serial_number": ".=g{Xhf?oE^g[s6\u007fBZ{O4", 00:14:04.148 "method": "nvmf_create_subsystem", 00:14:04.148 "req_id": 1 00:14:04.148 } 00:14:04.148 Got JSON-RPC error response 00:14:04.148 response: 00:14:04.148 { 00:14:04.148 "code": -32602, 00:14:04.148 "message": "Invalid SN .=g{Xhf?oE^g[s6\u007fBZ{O4" 00:14:04.148 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' 
'76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 55 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:04.148 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x60' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=c 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 54 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x68' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:04.149 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '_]s7;6B/.h0C`&H[@d8_VcMd{'\''6W6)zW=g:h79:H' 00:14:04.150 00:26:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '_]s7;6B/.h0C`&H[@d8_VcMd{'\''6W6)zW=g:h79:H' nqn.2016-06.io.spdk:cnode2346 00:14:04.409 [2024-07-12 00:26:32.098642] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2346: invalid model number '_]s7;6B/.h0C`&H[@d8_VcMd{'6W6)zW=g:h79:H' 00:14:04.409 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:04.409 { 00:14:04.409 "nqn": "nqn.2016-06.io.spdk:cnode2346", 00:14:04.409 "model_number": "_]s7;6B/.h0C`&H[@d8_VcMd{'\''6W6)zW=g:h79:\u007fH", 00:14:04.409 "method": "nvmf_create_subsystem", 00:14:04.409 "req_id": 1 00:14:04.409 } 00:14:04.409 Got JSON-RPC error response 00:14:04.409 response: 00:14:04.409 { 00:14:04.409 "code": -32602, 00:14:04.409 "message": "Invalid MN _]s7;6B/.h0C`&H[@d8_VcMd{'\''6W6)zW=g:h79:\u007fH" 00:14:04.409 }' 00:14:04.409 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:04.409 { 00:14:04.409 "nqn": "nqn.2016-06.io.spdk:cnode2346", 00:14:04.409 "model_number": "_]s7;6B/.h0C`&H[@d8_VcMd{'6W6)zW=g:h79:\u007fH", 00:14:04.409 "method": "nvmf_create_subsystem", 00:14:04.409 "req_id": 1 00:14:04.409 } 00:14:04.409 Got 
JSON-RPC error response 00:14:04.409 response: 00:14:04.409 { 00:14:04.409 "code": -32602, 00:14:04.409 "message": "Invalid MN _]s7;6B/.h0C`&H[@d8_VcMd{'6W6)zW=g:h79:\u007fH" 00:14:04.409 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:04.409 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:04.667 [2024-07-12 00:26:32.403724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.668 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:04.926 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:04.926 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:04.926 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:04.926 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:04.926 00:26:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:05.185 [2024-07-12 00:26:33.009688] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:05.442 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:05.442 { 00:14:05.442 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:05.442 "listen_address": { 00:14:05.442 "trtype": "tcp", 00:14:05.442 "traddr": "", 00:14:05.442 "trsvcid": "4421" 00:14:05.442 }, 00:14:05.442 "method": "nvmf_subsystem_remove_listener", 00:14:05.442 "req_id": 1 00:14:05.442 } 00:14:05.442 Got JSON-RPC error response 00:14:05.442 response: 00:14:05.442 { 00:14:05.442 "code": -32602, 00:14:05.442 "message": "Invalid parameters" 00:14:05.442 }' 00:14:05.442 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # 
[[ request: 00:14:05.442 { 00:14:05.442 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:05.442 "listen_address": { 00:14:05.442 "trtype": "tcp", 00:14:05.442 "traddr": "", 00:14:05.442 "trsvcid": "4421" 00:14:05.442 }, 00:14:05.442 "method": "nvmf_subsystem_remove_listener", 00:14:05.442 "req_id": 1 00:14:05.442 } 00:14:05.442 Got JSON-RPC error response 00:14:05.442 response: 00:14:05.442 { 00:14:05.442 "code": -32602, 00:14:05.442 "message": "Invalid parameters" 00:14:05.442 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:05.442 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18284 -i 0 00:14:05.699 [2024-07-12 00:26:33.306664] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18284: invalid cntlid range [0-65519] 00:14:05.699 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:05.699 { 00:14:05.699 "nqn": "nqn.2016-06.io.spdk:cnode18284", 00:14:05.699 "min_cntlid": 0, 00:14:05.699 "method": "nvmf_create_subsystem", 00:14:05.699 "req_id": 1 00:14:05.699 } 00:14:05.699 Got JSON-RPC error response 00:14:05.699 response: 00:14:05.699 { 00:14:05.699 "code": -32602, 00:14:05.699 "message": "Invalid cntlid range [0-65519]" 00:14:05.699 }' 00:14:05.699 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:05.699 { 00:14:05.700 "nqn": "nqn.2016-06.io.spdk:cnode18284", 00:14:05.700 "min_cntlid": 0, 00:14:05.700 "method": "nvmf_create_subsystem", 00:14:05.700 "req_id": 1 00:14:05.700 } 00:14:05.700 Got JSON-RPC error response 00:14:05.700 response: 00:14:05.700 { 00:14:05.700 "code": -32602, 00:14:05.700 "message": "Invalid cntlid range [0-65519]" 00:14:05.700 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.700 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10814 -i 65520 00:14:05.957 [2024-07-12 00:26:33.603552] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10814: invalid cntlid range [65520-65519] 00:14:05.957 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:05.957 { 00:14:05.957 "nqn": "nqn.2016-06.io.spdk:cnode10814", 00:14:05.957 "min_cntlid": 65520, 00:14:05.957 "method": "nvmf_create_subsystem", 00:14:05.957 "req_id": 1 00:14:05.957 } 00:14:05.957 Got JSON-RPC error response 00:14:05.957 response: 00:14:05.957 { 00:14:05.957 "code": -32602, 00:14:05.957 "message": "Invalid cntlid range [65520-65519]" 00:14:05.957 }' 00:14:05.957 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:05.957 { 00:14:05.957 "nqn": "nqn.2016-06.io.spdk:cnode10814", 00:14:05.957 "min_cntlid": 65520, 00:14:05.957 "method": "nvmf_create_subsystem", 00:14:05.957 "req_id": 1 00:14:05.957 } 00:14:05.957 Got JSON-RPC error response 00:14:05.957 response: 00:14:05.957 { 00:14:05.957 "code": -32602, 00:14:05.957 "message": "Invalid cntlid range [65520-65519]" 00:14:05.957 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.957 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15166 -I 0 00:14:06.215 [2024-07-12 00:26:33.900529] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15166: invalid cntlid range [1-0] 00:14:06.215 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:06.215 { 00:14:06.215 "nqn": "nqn.2016-06.io.spdk:cnode15166", 00:14:06.215 "max_cntlid": 0, 00:14:06.215 "method": "nvmf_create_subsystem", 00:14:06.215 "req_id": 1 00:14:06.215 } 00:14:06.215 Got JSON-RPC error response 00:14:06.215 response: 00:14:06.215 { 00:14:06.215 "code": -32602, 00:14:06.215 "message": "Invalid cntlid range [1-0]" 
00:14:06.215 }' 00:14:06.215 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:06.215 { 00:14:06.215 "nqn": "nqn.2016-06.io.spdk:cnode15166", 00:14:06.215 "max_cntlid": 0, 00:14:06.215 "method": "nvmf_create_subsystem", 00:14:06.215 "req_id": 1 00:14:06.215 } 00:14:06.215 Got JSON-RPC error response 00:14:06.215 response: 00:14:06.215 { 00:14:06.215 "code": -32602, 00:14:06.215 "message": "Invalid cntlid range [1-0]" 00:14:06.215 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:06.215 00:26:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11217 -I 65520 00:14:06.473 [2024-07-12 00:26:34.197495] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11217: invalid cntlid range [1-65520] 00:14:06.473 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:06.473 { 00:14:06.473 "nqn": "nqn.2016-06.io.spdk:cnode11217", 00:14:06.473 "max_cntlid": 65520, 00:14:06.473 "method": "nvmf_create_subsystem", 00:14:06.473 "req_id": 1 00:14:06.473 } 00:14:06.473 Got JSON-RPC error response 00:14:06.473 response: 00:14:06.473 { 00:14:06.473 "code": -32602, 00:14:06.473 "message": "Invalid cntlid range [1-65520]" 00:14:06.473 }' 00:14:06.473 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:06.473 { 00:14:06.473 "nqn": "nqn.2016-06.io.spdk:cnode11217", 00:14:06.473 "max_cntlid": 65520, 00:14:06.473 "method": "nvmf_create_subsystem", 00:14:06.473 "req_id": 1 00:14:06.473 } 00:14:06.473 Got JSON-RPC error response 00:14:06.473 response: 00:14:06.473 { 00:14:06.473 "code": -32602, 00:14:06.473 "message": "Invalid cntlid range [1-65520]" 00:14:06.473 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:06.473 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1175 -i 6 -I 5 00:14:06.731 [2024-07-12 00:26:34.494459] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1175: invalid cntlid range [6-5] 00:14:06.731 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:06.731 { 00:14:06.731 "nqn": "nqn.2016-06.io.spdk:cnode1175", 00:14:06.731 "min_cntlid": 6, 00:14:06.731 "max_cntlid": 5, 00:14:06.731 "method": "nvmf_create_subsystem", 00:14:06.731 "req_id": 1 00:14:06.731 } 00:14:06.731 Got JSON-RPC error response 00:14:06.731 response: 00:14:06.731 { 00:14:06.731 "code": -32602, 00:14:06.731 "message": "Invalid cntlid range [6-5]" 00:14:06.731 }' 00:14:06.731 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:06.731 { 00:14:06.731 "nqn": "nqn.2016-06.io.spdk:cnode1175", 00:14:06.731 "min_cntlid": 6, 00:14:06.731 "max_cntlid": 5, 00:14:06.731 "method": "nvmf_create_subsystem", 00:14:06.731 "req_id": 1 00:14:06.731 } 00:14:06.731 Got JSON-RPC error response 00:14:06.731 response: 00:14:06.731 { 00:14:06.731 "code": -32602, 00:14:06.731 "message": "Invalid cntlid range [6-5]" 00:14:06.731 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:06.731 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:06.992 { 00:14:06.992 "name": "foobar", 00:14:06.992 "method": "nvmf_delete_target", 00:14:06.992 "req_id": 1 00:14:06.992 } 00:14:06.992 Got JSON-RPC error response 00:14:06.992 response: 00:14:06.992 { 00:14:06.992 "code": -32602, 00:14:06.992 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:14:06.992 }' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:06.992 { 00:14:06.992 "name": "foobar", 00:14:06.992 "method": "nvmf_delete_target", 00:14:06.992 "req_id": 1 00:14:06.992 } 00:14:06.992 Got JSON-RPC error response 00:14:06.992 response: 00:14:06.992 { 00:14:06.992 "code": -32602, 00:14:06.992 "message": "The specified target doesn't exist, cannot delete it." 00:14:06.992 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.992 rmmod nvme_tcp 00:14:06.992 rmmod nvme_fabrics 00:14:06.992 rmmod nvme_keyring 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 905595 ']' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 905595 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 905595 ']' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 905595 00:14:06.992 00:26:34 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 905595 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 905595' 00:14:06.992 killing process with pid 905595 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 905595 00:14:06.992 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 905595 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.253 00:26:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.162 00:26:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.162 00:14:09.162 real 0m8.726s 00:14:09.162 user 0m22.494s 00:14:09.162 sys 0m2.225s 00:14:09.162 00:26:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:09.162 00:26:36 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.162 ************************************ 00:14:09.162 END TEST nvmf_invalid 00:14:09.162 ************************************ 00:14:09.162 00:26:36 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:09.162 00:26:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:09.162 00:26:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.162 00:26:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:09.421 ************************************ 00:14:09.421 START TEST nvmf_abort 00:14:09.421 ************************************ 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:09.421 * Looking for test storage... 00:14:09.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.421 
00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.421 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.422 
00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.422 00:26:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # 
set +x 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.812 00:26:38 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:10.812 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 
0x159b)' 00:14:10.812 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:10.812 Found net devices under 0000:08:00.0: cvl_0_0 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:10.812 Found net devices under 0000:08:00.1: cvl_0_1 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:10.812 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:11.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:14:11.072 00:14:11.072 --- 10.0.0.2 ping statistics --- 00:14:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.072 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:14:11.072 00:14:11.072 --- 10.0.0.1 ping statistics --- 00:14:11.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.072 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=907667 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 907667 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 907667 ']' 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.072 00:26:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.072 [2024-07-12 00:26:38.841302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:11.072 [2024-07-12 00:26:38.841408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.073 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.073 [2024-07-12 00:26:38.910270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.332 [2024-07-12 00:26:39.000920] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.332 [2024-07-12 00:26:39.000980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.332 [2024-07-12 00:26:39.000996] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.332 [2024-07-12 00:26:39.001010] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.332 [2024-07-12 00:26:39.001021] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:11.332 [2024-07-12 00:26:39.001102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.332 [2024-07-12 00:26:39.001155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.332 [2024-07-12 00:26:39.001157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.332 [2024-07-12 00:26:39.138365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.332 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.590 Malloc0 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.590 Delay0 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:11.590 [2024-07-12 00:26:39.204739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.590 00:26:39 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:11.590 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.590 [2024-07-12 00:26:39.352672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:14.153 Initializing NVMe Controllers 00:14:14.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:14.153 controller IO queue size 128 less than required 00:14:14.153 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:14.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:14.153 Initialization complete. Launching workers. 
00:14:14.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30820 00:14:14.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30881, failed to submit 62 00:14:14.153 success 30824, unsuccess 57, failed 0 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:14.153 rmmod nvme_tcp 00:14:14.153 rmmod nvme_fabrics 00:14:14.153 rmmod nvme_keyring 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 907667 ']' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 907667 ']' 00:14:14.153 00:26:41 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 907667' 00:14:14.153 killing process with pid 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 907667 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.153 00:26:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.062 00:26:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:16.062 00:14:16.062 real 0m6.824s 00:14:16.062 user 0m10.882s 00:14:16.062 sys 0m1.936s 00:14:16.062 00:26:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 
00:14:16.062 00:26:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:16.062 ************************************ 00:14:16.062 END TEST nvmf_abort 00:14:16.062 ************************************ 00:14:16.062 00:26:43 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:16.062 00:26:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:16.062 00:26:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.062 00:26:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:16.062 ************************************ 00:14:16.062 START TEST nvmf_ns_hotplug_stress 00:14:16.062 ************************************ 00:14:16.062 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:16.321 * Looking for test storage... 
00:14:16.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.321 00:26:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.321 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.322 00:26:43 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:14:17.704 Found 0000:08:00.0 (0x8086 - 0x159b) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.704 
00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:14:17.704 Found 0000:08:00.1 (0x8086 - 0x159b) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:14:17.704 
Found net devices under 0000:08:00.0: cvl_0_0 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:14:17.704 Found net devices under 0000:08:00.1: cvl_0_1 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.704 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.964 00:26:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:14:17.964 00:14:17.964 --- 10.0.0.2 ping statistics --- 00:14:17.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.964 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:14:17.964 00:14:17.964 --- 10.0.0.1 ping statistics --- 00:14:17.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.964 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.964 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=909387 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 909387 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 909387 ']' 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:17.965 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.965 [2024-07-12 00:26:45.725667] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:14:17.965 [2024-07-12 00:26:45.725755] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.965 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.965 [2024-07-12 00:26:45.790843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.223 [2024-07-12 00:26:45.877694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.223 [2024-07-12 00:26:45.877754] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.223 [2024-07-12 00:26:45.877771] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.223 [2024-07-12 00:26:45.877785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.223 [2024-07-12 00:26:45.877797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.223 [2024-07-12 00:26:45.877876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.223 [2024-07-12 00:26:45.877959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.223 [2024-07-12 00:26:45.877992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:18.223 00:26:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:18.482 [2024-07-12 00:26:46.269990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.482 00:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:19.051 00:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.051 [2024-07-12 00:26:46.865557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:14:19.051 00:26:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:19.619 00:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:19.879 Malloc0 00:14:19.879 00:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:20.138 Delay0 00:14:20.138 00:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.397 00:26:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:20.397 NULL1 00:14:20.397 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:20.656 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=909630 00:14:20.656 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:20.656 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:20.656 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.914 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.172 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:21.172 00:26:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:21.430 true 00:14:21.430 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:21.430 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.687 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.946 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:21.946 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:22.204 true 00:14:22.204 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:22.204 00:26:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.140 Read completed with error (sct=0, sc=11) 00:14:23.140 00:26:50 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:23.398 00:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:23.398 00:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:23.657 true 00:14:23.657 00:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:23.657 00:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.914 00:26:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.481 00:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:24.481 00:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:24.481 true 00:14:24.481 00:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:24.481 00:26:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.416 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.675 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:25.675 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:25.933 true 00:14:25.933 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:25.933 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.191 00:26:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.450 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:26.450 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:26.709 true 00:14:26.709 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:26.709 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.967 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.225 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:27.225 00:26:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:27.483 true 00:14:27.483 00:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:27.483 00:26:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.418 00:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:28.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:28.727 00:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:28.727 00:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:28.984 true 00:14:28.984 00:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:28.984 00:26:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:29.917 00:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.917 00:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:29.917 00:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:30.175 true 00:14:30.175 00:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:30.175 00:26:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.433 00:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.690 00:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:30.690 00:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:30.947 true 00:14:30.947 00:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:30.947 00:26:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:31.884 00:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.142 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:32.142 
00:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:32.142 00:26:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:32.399 true 00:14:32.399 00:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:32.399 00:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.657 00:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.223 00:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:33.223 00:27:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:33.480 true 00:14:33.480 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:33.480 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.737 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.994 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:33.994 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:34.252 true 
00:14:34.252 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:34.252 00:27:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.190 00:27:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:35.446 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:35.446 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:35.703 true 00:14:35.703 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:35.703 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.960 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.217 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:36.217 00:27:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:36.475 true 00:14:36.475 00:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:36.475 00:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.044 00:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.302 00:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:37.302 00:27:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:37.561 true 00:14:37.561 00:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:37.561 00:27:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.498 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.498 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:38.498 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:38.757 true 00:14:39.015 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:39.015 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.273 00:27:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.532 00:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:39.532 00:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:39.790 true 00:14:39.790 00:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:39.791 00:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.049 00:27:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.307 00:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:40.307 00:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:40.566 true 00:14:40.566 00:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:40.566 00:27:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.502 00:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.761 00:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:41.761 00:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:42.019 true 00:14:42.019 00:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:42.019 00:27:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.278 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.846 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:42.846 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:42.846 true 00:14:43.105 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:43.105 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.401 00:27:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.660 00:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:43.660 00:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:43.918 true 00:14:43.918 00:27:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:43.918 00:27:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.855 00:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.855 00:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:44.855 00:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:45.113 true 00:14:45.113 00:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:45.113 00:27:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.680 00:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.940 00:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:45.940 00:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:45.940 true 00:14:45.940 00:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:45.940 00:27:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.197 00:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.455 00:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:46.455 00:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:46.712 true 00:14:46.712 00:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:46.712 00:27:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.085 00:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.085 00:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:48.085 00:27:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:48.344 true 00:14:48.344 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:48.344 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.602 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.860 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:48.860 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:49.428 true 00:14:49.428 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:49.428 00:27:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.687 00:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.945 00:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:49.945 00:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:50.203 true 00:14:50.203 00:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630 00:14:50.203 00:27:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.140 00:27:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.140 Initializing NVMe Controllers 00:14:51.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.140 Controller IO queue size 128, less than required. 
00:14:51.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:51.140 Controller IO queue size 128, less than required.
00:14:51.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:51.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:51.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:51.140 Initialization complete. Launching workers.
00:14:51.140 ========================================================
00:14:51.140 Latency(us)
00:14:51.140 Device Information : IOPS MiB/s Average min max
00:14:51.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 600.20 0.29 87921.48 3203.36 1012770.86
00:14:51.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7607.40 3.71 16766.36 3371.99 566604.32
00:14:51.140 ========================================================
00:14:51.140 Total : 8207.60 4.01 21969.74 3203.36 1012770.86
00:14:51.140
00:14:51.399 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:14:51.399 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:14:51.657 true
00:14:51.657 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 909630
00:14:51.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (909630) - No such process
00:14:51.657 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 909630
00:14:51.657 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
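The trace above is ns_hotplug_stress.sh lines 44-50 repeating: while the I/O generator (PID 909630) is still alive, the script removes and re-adds namespace 1 and grows the NULL1 bdev by one unit per iteration, exiting once `kill -0` reports the process gone. A minimal self-contained sketch of that watchdog loop — `rpc` here is a hypothetical stand-in for scripts/rpc.py, and the liveness check is simulated with a counter instead of a real PID:

```shell
#!/usr/bin/env bash
# Hedged sketch of the resize/hotplug watchdog loop from ns_hotplug_stress.sh.
# `rpc` stands in for scripts/rpc.py; `io_gen_alive` simulates `kill -0 $pid`.
rpc() { echo "rpc $*"; }

ticks=3                               # pretend the I/O generator survives 3 iterations
io_gen_alive() { (( ticks-- > 0 )); } # real script: kill -0 "$perf_pid"

null_size=1018
while io_gen_alive; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))                 # grow the null bdev every pass
    rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

The point of the pattern is that the resize races against live I/O: once the generator exits (`No such process` above), the loop stops and the script moves on to teardown.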
00:14:51.916 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:52.174 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:52.174 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:52.174 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:52.174 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.174 00:27:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:52.432 null0 00:14:52.432 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.432 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.432 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:52.690 null1 00:14:52.690 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:52.690 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:52.690 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:53.260 null2 00:14:53.260 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.260 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.260 00:27:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:53.260 null3 00:14:53.519 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.519 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.519 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:53.778 null4 00:14:53.778 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:53.778 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:53.778 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:54.036 null5 00:14:54.036 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.036 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.036 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:54.294 null6 00:14:54.294 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.294 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.294 00:27:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:54.553 null7 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
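From line 58 onward the script fans out: eight `add_remove nsid bdev` workers are launched in the background, their PIDs collected with `pids+=($!)`, and the parent later `wait`s on all of them; each worker attaches and detaches its own namespace ten times. A hedged, self-contained sketch of that fan-out pattern — `rpc` is again a no-op stand-in for scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Hedged sketch of the 8-way add/remove stress pattern (ns_hotplug_stress.sh@58-66).
# `rpc` is a stand-in for scripts/rpc.py; the real workers target nqn.2016-06.io.spdk:cnode1.
rpc() { :; }

add_remove() {                       # one worker: hotplug its namespace 10 times
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for ((t = 0; t < nthreads; t++)); do
    add_remove "$((t + 1))" "null$t" &   # one background worker per null bdev
    pids+=($!)
done
wait "${pids[@]}"                        # block until every worker finishes
echo "all ${#pids[@]} workers done"
```

Running eight workers concurrently against the same subsystem is what makes this a stress test: the target must serialize concurrent namespace attach/detach RPCs without corrupting state, which is why the interleaved per-worker traces above appear shuffled.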
00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.553 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 913498 913499 913501 913503 913506 913509 913512 913515 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:54.554 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:54.817 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.075 00:27:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.075 00:27:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.334 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.592 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:55.851 00:27:23 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:55.851 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.109 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.367 00:27:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.367 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.625 00:27:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.625 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:56.884 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.143 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.464 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.464 00:27:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.464 00:27:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.464 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
5 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.723 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:57.982 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.240 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.240 00:27:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.240 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.498 00:27:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:58.498 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:58.757 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.016 00:27:26 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.016 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.274 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.274 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.274 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.274 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.274 00:27:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.274 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:59.532 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:59.790 00:27:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:59.790 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.048 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.048 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.048 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:00.049 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.049 
00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.049 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.049 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.049 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.049 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.307 rmmod nvme_tcp 00:15:00.307 rmmod nvme_fabrics 00:15:00.307 rmmod nvme_keyring 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 909387 ']' 
00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 909387 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 909387 ']' 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 909387 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:00.307 00:27:27 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 909387 00:15:00.307 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:00.307 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:00.307 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 909387' 00:15:00.307 killing process with pid 909387 00:15:00.307 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 909387 00:15:00.307 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 909387 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:15:00.567 00:27:28 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.472 00:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.472 00:15:02.472 real 0m46.350s 00:15:02.472 user 3m36.706s 00:15:02.472 sys 0m14.927s 00:15:02.472 00:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:02.472 00:27:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:02.472 ************************************ 00:15:02.472 END TEST nvmf_ns_hotplug_stress 00:15:02.472 ************************************ 00:15:02.472 00:27:30 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:02.472 00:27:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:02.472 00:27:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:02.472 00:27:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.472 ************************************ 00:15:02.472 START TEST nvmf_connect_stress 00:15:02.472 ************************************ 00:15:02.472 00:27:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:02.731 * Looking for test storage... 
00:15:02.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.731 00:27:30 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.731 00:27:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.635 00:27:32 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.635 
00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:04.635 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:04.635 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.635 
00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:04.635 Found net devices under 0000:08:00.0: cvl_0_0 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.635 00:27:32 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:04.635 Found net devices under 0000:08:00.1: cvl_0_1 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:04.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:15:04.635 00:15:04.635 --- 10.0.0.2 ping statistics --- 00:15:04.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.635 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:15:04.635 00:15:04.635 --- 10.0.0.1 ping statistics --- 00:15:04.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.635 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.635 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:04.636 00:27:32 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=915696 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 915696 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 915696 ']' 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:04.636 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.636 [2024-07-12 00:27:32.226443] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:04.636 [2024-07-12 00:27:32.226535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.636 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.636 [2024-07-12 00:27:32.290747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:04.636 [2024-07-12 00:27:32.377404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:04.636 [2024-07-12 00:27:32.377461] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.636 [2024-07-12 00:27:32.377484] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.636 [2024-07-12 00:27:32.377498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.636 [2024-07-12 00:27:32.377510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.636 [2024-07-12 00:27:32.377697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.636 [2024-07-12 00:27:32.378123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.636 [2024-07-12 00:27:32.378520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 [2024-07-12 00:27:32.505385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.893 00:27:32 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 [2024-07-12 00:27:32.530723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:04.893 NULL1 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=915725 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.893 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.151 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.151 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:05.151 00:27:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.151 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.151 00:27:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.408 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.408 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:05.408 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.408 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.408 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:05.974 00:27:33 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.974 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:05.974 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:05.974 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.974 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.232 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.232 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:06.232 00:27:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.232 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.232 00:27:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.490 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.490 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:06.490 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.490 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.490 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:06.749 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.749 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:06.749 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:06.749 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.749 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.007 00:27:34 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.007 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:07.007 00:27:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.007 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.007 00:27:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.574 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.574 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:07.574 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.574 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.574 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:07.832 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.832 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:07.832 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:07.832 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.832 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.089 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.089 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:08.089 00:27:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.089 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.089 00:27:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.347 00:27:36 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.347 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:08.347 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.347 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.347 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:08.605 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.863 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:08.863 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:08.863 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.863 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.121 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.121 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:09.121 00:27:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.121 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.121 00:27:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.380 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.380 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:09.380 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.380 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.380 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.639 00:27:37 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.639 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:09.639 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.639 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.639 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:09.897 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.897 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:09.897 00:27:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:09.897 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.897 00:27:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.463 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.463 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:10.463 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.463 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.463 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.721 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.721 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:10.721 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.721 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.721 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:10.978 00:27:38 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.978 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:10.978 00:27:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:10.978 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.978 00:27:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.236 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.236 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:11.236 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.236 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.236 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:11.802 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.802 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:11.802 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:11.802 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.802 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.060 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.060 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:12.060 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.060 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.060 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.318 00:27:39 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.318 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:12.318 00:27:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.318 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.318 00:27:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.606 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.606 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:12.606 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.606 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.606 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.864 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.864 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:12.864 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.864 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.864 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.122 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.122 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:13.122 00:27:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.122 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.122 00:27:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.689 00:27:41 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.689 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:13.689 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.689 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.689 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.947 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:13.947 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.947 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.947 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.205 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.205 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:14.206 00:27:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.206 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.206 00:27:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.463 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.463 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:14.463 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.463 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.463 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.721 00:27:42 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.721 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:14.721 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.721 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.721 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.979 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 915725 00:15:15.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (915725) - No such process 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 915725 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.237 rmmod nvme_tcp 00:15:15.237 rmmod nvme_fabrics 00:15:15.237 rmmod 
nvme_keyring 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 915696 ']' 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 915696 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 915696 ']' 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 915696 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 915696 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 915696' 00:15:15.237 killing process with pid 915696 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 915696 00:15:15.237 00:27:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 915696 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.496 00:27:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.404 00:27:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:17.404 00:15:17.404 real 0m14.857s 00:15:17.404 user 0m39.420s 00:15:17.404 sys 0m4.313s 00:15:17.404 00:27:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:17.405 00:27:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.405 ************************************ 00:15:17.405 END TEST nvmf_connect_stress 00:15:17.405 ************************************ 00:15:17.405 00:27:45 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:17.405 00:27:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:17.405 00:27:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:17.405 00:27:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:17.405 ************************************ 00:15:17.405 START TEST nvmf_fused_ordering 00:15:17.405 ************************************ 00:15:17.405 00:27:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:17.663 * Looking for test storage... 
00:15:17.663 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.663 00:27:45 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.663 00:27:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:17.664 00:27:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.040 00:27:46 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.040 
00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.040 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:19.041 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:19.041 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.041 
00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:19.041 Found net devices under 0000:08:00.0: cvl_0_0 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.041 00:27:46 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:19.041 Found net devices under 0000:08:00.1: cvl_0_1 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.041 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:19.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:15:19.298 00:15:19.298 --- 10.0.0.2 ping statistics --- 00:15:19.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.298 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:15:19.298 00:15:19.298 --- 10.0.0.1 ping statistics --- 00:15:19.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.298 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.298 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:19.299 00:27:46 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=918151 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 918151 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 918151 ']' 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:19.299 00:27:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.299 [2024-07-12 00:27:47.032689] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:19.299 [2024-07-12 00:27:47.032790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.299 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.299 [2024-07-12 00:27:47.098694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.556 [2024-07-12 00:27:47.188937] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:19.556 [2024-07-12 00:27:47.189007] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.556 [2024-07-12 00:27:47.189023] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.556 [2024-07-12 00:27:47.189036] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.556 [2024-07-12 00:27:47.189048] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.556 [2024-07-12 00:27:47.189085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.556 [2024-07-12 00:27:47.325370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.556 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.557 [2024-07-12 00:27:47.341524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.557 NULL1 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.557 00:27:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:19.557 [2024-07-12 00:27:47.387144] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:19.557 [2024-07-12 00:27:47.387189] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid918172 ] 00:15:19.824 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.082 Attached to nqn.2016-06.io.spdk:cnode1 00:15:20.082 Namespace ID: 1 size: 1GB 00:15:20.082 fused_ordering(0) 00:15:20.082 fused_ordering(1) 00:15:20.082 fused_ordering(2) 00:15:20.082 fused_ordering(3) 00:15:20.082 fused_ordering(4) 00:15:20.082 fused_ordering(5) 00:15:20.082 fused_ordering(6) 00:15:20.082 fused_ordering(7) 00:15:20.082 fused_ordering(8) 00:15:20.082 fused_ordering(9) 00:15:20.082 fused_ordering(10) 00:15:20.082 fused_ordering(11) 00:15:20.082 fused_ordering(12) 00:15:20.082 fused_ordering(13) 00:15:20.082 fused_ordering(14) 00:15:20.082 fused_ordering(15) 00:15:20.082 fused_ordering(16) 00:15:20.082 fused_ordering(17) 00:15:20.082 fused_ordering(18) 00:15:20.082 fused_ordering(19) 00:15:20.082 fused_ordering(20) 00:15:20.082 fused_ordering(21) 00:15:20.082 fused_ordering(22) 00:15:20.082 fused_ordering(23) 00:15:20.082 fused_ordering(24) 00:15:20.082 fused_ordering(25) 00:15:20.082 fused_ordering(26) 00:15:20.082 
fused_ordering(27) 00:15:20.082 fused_ordering(28) 00:15:20.082 fused_ordering(29) 00:15:20.082 fused_ordering(30) 00:15:20.082 fused_ordering(31) 00:15:20.082 fused_ordering(32) 00:15:20.082 fused_ordering(33) 00:15:20.082 fused_ordering(34) 00:15:20.082 fused_ordering(35) 00:15:20.082 fused_ordering(36) 00:15:20.082 fused_ordering(37) 00:15:20.082 fused_ordering(38) 00:15:20.082 fused_ordering(39) 00:15:20.082 fused_ordering(40) 00:15:20.082 fused_ordering(41) 00:15:20.082 fused_ordering(42) 00:15:20.082 fused_ordering(43) 00:15:20.082 fused_ordering(44) 00:15:20.082 fused_ordering(45) 00:15:20.082 fused_ordering(46) 00:15:20.082 fused_ordering(47) 00:15:20.082 fused_ordering(48) 00:15:20.082 fused_ordering(49) 00:15:20.082 fused_ordering(50) 00:15:20.082 fused_ordering(51) 00:15:20.082 fused_ordering(52) 00:15:20.082 fused_ordering(53) 00:15:20.082 fused_ordering(54) 00:15:20.082 fused_ordering(55) 00:15:20.082 fused_ordering(56) 00:15:20.082 fused_ordering(57) 00:15:20.082 fused_ordering(58) 00:15:20.082 fused_ordering(59) 00:15:20.082 fused_ordering(60) 00:15:20.082 fused_ordering(61) 00:15:20.082 fused_ordering(62) 00:15:20.082 fused_ordering(63) 00:15:20.082 fused_ordering(64) 00:15:20.082 fused_ordering(65) 00:15:20.082 fused_ordering(66) 00:15:20.082 fused_ordering(67) 00:15:20.082 fused_ordering(68) 00:15:20.082 fused_ordering(69) 00:15:20.082 fused_ordering(70) 00:15:20.082 fused_ordering(71) 00:15:20.082 fused_ordering(72) 00:15:20.082 fused_ordering(73) 00:15:20.082 fused_ordering(74) 00:15:20.082 fused_ordering(75) 00:15:20.082 fused_ordering(76) 00:15:20.082 fused_ordering(77) 00:15:20.082 fused_ordering(78) 00:15:20.082 fused_ordering(79) 00:15:20.082 fused_ordering(80) 00:15:20.082 fused_ordering(81) 00:15:20.082 fused_ordering(82) 00:15:20.082 fused_ordering(83) 00:15:20.082 fused_ordering(84) 00:15:20.082 fused_ordering(85) 00:15:20.082 fused_ordering(86) 00:15:20.082 fused_ordering(87) 00:15:20.082 fused_ordering(88) 00:15:20.082 
fused_ordering(89) 00:15:20.082 fused_ordering(90) 00:15:20.082 fused_ordering(91) 00:15:20.082 fused_ordering(92) 00:15:20.082 fused_ordering(93) 00:15:20.082 fused_ordering(94) 00:15:20.082 fused_ordering(95) 00:15:20.082 fused_ordering(96) 00:15:20.082 fused_ordering(97) 00:15:20.082 fused_ordering(98) 00:15:20.082 fused_ordering(99) 00:15:20.082 fused_ordering(100) 00:15:20.082 fused_ordering(101) 00:15:20.082 fused_ordering(102) 00:15:20.082 fused_ordering(103) 00:15:20.082 fused_ordering(104) 00:15:20.082 fused_ordering(105) 00:15:20.082 fused_ordering(106) 00:15:20.082 fused_ordering(107) 00:15:20.082 fused_ordering(108) 00:15:20.082 fused_ordering(109) 00:15:20.082 fused_ordering(110) 00:15:20.082 fused_ordering(111) 00:15:20.082 fused_ordering(112) 00:15:20.083 fused_ordering(113) 00:15:20.083 fused_ordering(114) 00:15:20.083 fused_ordering(115) 00:15:20.083 fused_ordering(116) 00:15:20.083 fused_ordering(117) 00:15:20.083 fused_ordering(118) 00:15:20.083 fused_ordering(119) 00:15:20.083 fused_ordering(120) 00:15:20.083 fused_ordering(121) 00:15:20.083 fused_ordering(122) 00:15:20.083 fused_ordering(123) 00:15:20.083 fused_ordering(124) 00:15:20.083 fused_ordering(125) 00:15:20.083 fused_ordering(126) 00:15:20.083 fused_ordering(127) 00:15:20.083 fused_ordering(128) 00:15:20.083 fused_ordering(129) 00:15:20.083 fused_ordering(130) 00:15:20.083 fused_ordering(131) 00:15:20.083 fused_ordering(132) 00:15:20.083 fused_ordering(133) 00:15:20.083 fused_ordering(134) 00:15:20.083 fused_ordering(135) 00:15:20.083 fused_ordering(136) 00:15:20.083 fused_ordering(137) 00:15:20.083 fused_ordering(138) 00:15:20.083 fused_ordering(139) 00:15:20.083 fused_ordering(140) 00:15:20.083 fused_ordering(141) 00:15:20.083 fused_ordering(142) 00:15:20.083 fused_ordering(143) 00:15:20.083 fused_ordering(144) 00:15:20.083 fused_ordering(145) 00:15:20.083 fused_ordering(146) 00:15:20.083 fused_ordering(147) 00:15:20.083 fused_ordering(148) 00:15:20.083 fused_ordering(149) 
00:15:20.083 fused_ordering(150) 00:15:20.083 fused_ordering(151) 00:15:20.083 fused_ordering(152) 00:15:20.083 fused_ordering(153) 00:15:20.083 fused_ordering(154) 00:15:20.083 fused_ordering(155) 00:15:20.083 fused_ordering(156) 00:15:20.083 fused_ordering(157) 00:15:20.083 fused_ordering(158) 00:15:20.083 fused_ordering(159) 00:15:20.083 fused_ordering(160) 00:15:20.083 fused_ordering(161) 00:15:20.083 fused_ordering(162) 00:15:20.083 fused_ordering(163) 00:15:20.083 fused_ordering(164) 00:15:20.083 fused_ordering(165) 00:15:20.083 fused_ordering(166) 00:15:20.083 fused_ordering(167) 00:15:20.083 fused_ordering(168) 00:15:20.083 fused_ordering(169) 00:15:20.083 fused_ordering(170) 00:15:20.083 fused_ordering(171) 00:15:20.083 fused_ordering(172) 00:15:20.083 fused_ordering(173) 00:15:20.083 fused_ordering(174) 00:15:20.083 fused_ordering(175) 00:15:20.083 fused_ordering(176) 00:15:20.083 fused_ordering(177) 00:15:20.083 fused_ordering(178) 00:15:20.083 fused_ordering(179) 00:15:20.083 fused_ordering(180) 00:15:20.083 fused_ordering(181) 00:15:20.083 fused_ordering(182) 00:15:20.083 fused_ordering(183) 00:15:20.083 fused_ordering(184) 00:15:20.083 fused_ordering(185) 00:15:20.083 fused_ordering(186) 00:15:20.083 fused_ordering(187) 00:15:20.083 fused_ordering(188) 00:15:20.083 fused_ordering(189) 00:15:20.083 fused_ordering(190) 00:15:20.083 fused_ordering(191) 00:15:20.083 fused_ordering(192) 00:15:20.083 fused_ordering(193) 00:15:20.083 fused_ordering(194) 00:15:20.083 fused_ordering(195) 00:15:20.083 fused_ordering(196) 00:15:20.083 fused_ordering(197) 00:15:20.083 fused_ordering(198) 00:15:20.083 fused_ordering(199) 00:15:20.083 fused_ordering(200) 00:15:20.083 fused_ordering(201) 00:15:20.083 fused_ordering(202) 00:15:20.083 fused_ordering(203) 00:15:20.083 fused_ordering(204) 00:15:20.083 fused_ordering(205) 00:15:20.340 fused_ordering(206) 00:15:20.340 fused_ordering(207) 00:15:20.340 fused_ordering(208) 00:15:20.340 fused_ordering(209) 00:15:20.340 
fused_ordering(210) 00:15:20.340 fused_ordering(211) 00:15:20.340 fused_ordering(212) 00:15:20.340 fused_ordering(213) 00:15:20.340 fused_ordering(214) 00:15:20.340 fused_ordering(215) 00:15:20.340 fused_ordering(216) 00:15:20.340 fused_ordering(217) 00:15:20.340 fused_ordering(218) 00:15:20.340 fused_ordering(219) 00:15:20.340 fused_ordering(220) 00:15:20.340 fused_ordering(221) 00:15:20.340 fused_ordering(222) 00:15:20.340 fused_ordering(223) 00:15:20.340 fused_ordering(224) 00:15:20.340 fused_ordering(225) 00:15:20.340 fused_ordering(226) 00:15:20.340 fused_ordering(227) 00:15:20.340 fused_ordering(228) 00:15:20.340 fused_ordering(229) 00:15:20.340 fused_ordering(230) 00:15:20.340 fused_ordering(231) 00:15:20.340 fused_ordering(232) 00:15:20.340 fused_ordering(233) 00:15:20.340 fused_ordering(234) 00:15:20.340 fused_ordering(235) 00:15:20.340 fused_ordering(236) 00:15:20.340 fused_ordering(237) 00:15:20.340 fused_ordering(238) 00:15:20.340 fused_ordering(239) 00:15:20.340 fused_ordering(240) 00:15:20.340 fused_ordering(241) 00:15:20.340 fused_ordering(242) 00:15:20.340 fused_ordering(243) 00:15:20.340 fused_ordering(244) 00:15:20.340 fused_ordering(245) 00:15:20.340 fused_ordering(246) 00:15:20.340 fused_ordering(247) 00:15:20.340 fused_ordering(248) 00:15:20.340 fused_ordering(249) 00:15:20.340 fused_ordering(250) 00:15:20.340 fused_ordering(251) 00:15:20.340 fused_ordering(252) 00:15:20.340 fused_ordering(253) 00:15:20.340 fused_ordering(254) 00:15:20.340 fused_ordering(255) 00:15:20.340 fused_ordering(256) 00:15:20.340 fused_ordering(257) 00:15:20.340 fused_ordering(258) 00:15:20.340 fused_ordering(259) 00:15:20.340 fused_ordering(260) 00:15:20.340 fused_ordering(261) 00:15:20.340 fused_ordering(262) 00:15:20.340 fused_ordering(263) 00:15:20.340 fused_ordering(264) 00:15:20.340 fused_ordering(265) 00:15:20.340 fused_ordering(266) 00:15:20.340 fused_ordering(267) 00:15:20.340 fused_ordering(268) 00:15:20.340 fused_ordering(269) 00:15:20.340 fused_ordering(270) 
00:15:20.340 fused_ordering(271) 00:15:20.340 fused_ordering(272) 00:15:20.340 fused_ordering(273) 00:15:20.340 fused_ordering(274) 00:15:20.340 fused_ordering(275) 00:15:20.340 fused_ordering(276) 00:15:20.340 fused_ordering(277) 00:15:20.340 fused_ordering(278) 00:15:20.340 fused_ordering(279) 00:15:20.340 fused_ordering(280) 00:15:20.340 fused_ordering(281) 00:15:20.340 fused_ordering(282) 00:15:20.340 fused_ordering(283) 00:15:20.340 fused_ordering(284) 00:15:20.340 fused_ordering(285) 00:15:20.340 fused_ordering(286) 00:15:20.340 fused_ordering(287) 00:15:20.340 fused_ordering(288) 00:15:20.340 fused_ordering(289) 00:15:20.340 fused_ordering(290) 00:15:20.340 fused_ordering(291) 00:15:20.340 fused_ordering(292) 00:15:20.340 fused_ordering(293) 00:15:20.340 fused_ordering(294) 00:15:20.340 fused_ordering(295) 00:15:20.340 fused_ordering(296) 00:15:20.340 fused_ordering(297) 00:15:20.340 fused_ordering(298) 00:15:20.340 fused_ordering(299) 00:15:20.340 fused_ordering(300) 00:15:20.340 fused_ordering(301) 00:15:20.340 fused_ordering(302) 00:15:20.340 fused_ordering(303) 00:15:20.340 fused_ordering(304) 00:15:20.340 fused_ordering(305) 00:15:20.340 fused_ordering(306) 00:15:20.340 fused_ordering(307) 00:15:20.340 fused_ordering(308) 00:15:20.340 fused_ordering(309) 00:15:20.340 fused_ordering(310) 00:15:20.340 fused_ordering(311) 00:15:20.340 fused_ordering(312) 00:15:20.340 fused_ordering(313) 00:15:20.340 fused_ordering(314) 00:15:20.340 fused_ordering(315) 00:15:20.340 fused_ordering(316) 00:15:20.340 fused_ordering(317) 00:15:20.340 fused_ordering(318) 00:15:20.340 fused_ordering(319) 00:15:20.340 fused_ordering(320) 00:15:20.340 fused_ordering(321) 00:15:20.340 fused_ordering(322) 00:15:20.340 fused_ordering(323) 00:15:20.340 fused_ordering(324) 00:15:20.340 fused_ordering(325) 00:15:20.340 fused_ordering(326) 00:15:20.340 fused_ordering(327) 00:15:20.340 fused_ordering(328) 00:15:20.340 fused_ordering(329) 00:15:20.340 fused_ordering(330) 00:15:20.340 
fused_ordering(331) 00:15:20.340 ... fused_ordering(1023) 00:15:22.407 [repetitive per-command fused_ordering(331)-(1023) log entries elided; all logged between 00:15:20.340 and 00:15:22.407]
00:27:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:22.407 rmmod nvme_tcp 00:15:22.407 rmmod nvme_fabrics 00:15:22.407 rmmod nvme_keyring 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:22.407 00:27:50
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 918151 ']' 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 918151 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 918151 ']' 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 918151 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 918151 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 918151' 00:15:22.407 killing process with pid 918151 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 918151 00:15:22.407 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 918151 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.667 00:27:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.574 00:27:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:24.574 00:15:24.574 real 0m7.104s 00:15:24.574 user 0m5.567s 00:15:24.574 sys 0m2.511s 00:15:24.574 00:27:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:24.574 00:27:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 ************************************ 00:15:24.574 END TEST nvmf_fused_ordering 00:15:24.574 ************************************ 00:15:24.574 00:27:52 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:24.574 00:27:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:24.574 00:27:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:24.574 00:27:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:24.574 ************************************ 00:15:24.574 START TEST nvmf_delete_subsystem 00:15:24.574 ************************************ 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:24.574 * Looking for test storage... 
00:15:24.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.574 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.833 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.833 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.833 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.833 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.833 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:24.834 00:27:52 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable
00:15:24.834 00:27:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=()
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:26.218 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:15:26.219 Found 0000:08:00.0 (0x8086 - 0x159b)
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:15:26.219 Found 0000:08:00.1 (0x8086 - 0x159b)
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:15:26.219 Found net devices under 0000:08:00.0: cvl_0_0
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:15:26.219 Found net devices under 0000:08:00.1: cvl_0_1
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:15:26.219 00:27:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:26.219 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:26.219 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:26.219 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:15:26.219 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:15:26.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:26.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms
00:15:26.477
00:15:26.477 --- 10.0.0.2 ping statistics ---
00:15:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:26.477 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:26.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:26.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:15:26.477
00:15:26.477 --- 10.0.0.1 ping statistics ---
00:15:26.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:26.477 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=919874
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 919874
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 919874 ']'
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:26.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable
00:15:26.477 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.477 [2024-07-12 00:27:54.173391] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:15:26.477 [2024-07-12 00:27:54.173474] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:26.478 EAL: No free 2048 kB hugepages reported on node 1
00:15:26.478 [2024-07-12 00:27:54.237151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:26.736 [2024-07-12 00:27:54.323839] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:26.736 [2024-07-12 00:27:54.323892] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:26.736 [2024-07-12 00:27:54.323907] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:26.736 [2024-07-12 00:27:54.323921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:26.736 [2024-07-12 00:27:54.323933] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:26.736 [2024-07-12 00:27:54.324027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:26.736 [2024-07-12 00:27:54.324032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.736 [2024-07-12 00:27:54.454397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.736 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.737 [2024-07-12 00:27:54.470577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.737 NULL1
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.737 Delay0
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=919976
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:15:26.737 00:27:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:15:26.737 EAL: No free 2048 kB hugepages reported on node 1
00:15:26.737 [2024-07-12 00:27:54.545338] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:29.276 00:27:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.276 00:27:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.276 00:27:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error 
(sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 [2024-07-12 00:27:56.667690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12306a0 is same with the state(5) to be set 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 
Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 starting I/O failed: -6 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 [2024-07-12 00:27:56.668328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd50c000c00 is same with the state(5) to be set 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with 
error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 
00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Read completed with error (sct=0, sc=8) 00:15:29.276 Write completed 
with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Write completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Write completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.277 Read completed with error (sct=0, sc=8) 00:15:29.876 [2024-07-12 00:27:57.641080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(5) to be set 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, 
sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 [2024-07-12 00:27:57.672826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1230090 is same with the state(5) to be set 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Read completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 Write completed with error (sct=0, sc=8) 00:15:29.876 [2024-07-12 00:27:57.673255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12309b0 is same with the state(5) to be set 00:15:29.876 Write completed with error (sct=0, sc=8) 
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Write completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Write completed with error (sct=0, sc=8)
00:15:29.876 [2024-07-12 00:27:57.673445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd50c00c2f0 is same with the state(5) to be set
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Read completed with error (sct=0, sc=8)
00:15:29.876 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Read completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 Write completed with error (sct=0, sc=8)
00:15:29.877 [2024-07-12 00:27:57.673686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1230270 is same with the state(5) to be set
00:15:29.877 Initializing NVMe Controllers
00:15:29.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:29.877 Controller IO queue size 128, less than required.
00:15:29.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:29.877 Initialization complete. Launching workers.
00:15:29.877 ========================================================
00:15:29.877 Latency(us)
00:15:29.877 Device Information : IOPS MiB/s Average min max
00:15:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.17 0.08 966476.80 882.11 1044691.76
00:15:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.84 0.07 901142.45 355.05 1014920.86
00:15:29.877 ========================================================
00:15:29.877 Total : 322.01 0.16 936074.69 355.05 1044691.76
00:15:29.877
00:15:29.877 [2024-07-12 00:27:57.674629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12337c0 (9): Bad file descriptor
00:15:29.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:29.877 00:27:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:29.877 00:27:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:15:29.877 00:27:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 919976
00:15:29.877 00:27:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 919976
00:15:30.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (919976) - No such process
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 919976
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 919976
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 919976
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:30.446 [2024-07-12 00:27:58.197570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=920291
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:30.446 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:30.446 EAL: No free 2048 kB hugepages reported on node 1
00:15:30.446 [2024-07-12 00:27:58.257541] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:31.013 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:31.013 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:31.013 00:27:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:31.581 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:31.581 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:31.581 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:32.150 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:32.150 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:32.150 00:27:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:32.408 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:32.408 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:32.408 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:32.978 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:32.978 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:32.978 00:28:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:33.548 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:33.548 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:33.548 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:33.807 Initializing NVMe Controllers
00:15:33.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:33.807 Controller IO queue size 128, less than required.
00:15:33.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:33.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:33.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:33.807 Initialization complete. Launching workers.
00:15:33.807 ========================================================
00:15:33.807 Latency(us)
00:15:33.807 Device Information : IOPS MiB/s Average min max
00:15:33.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005128.80 1000254.45 1042831.32
00:15:33.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004658.48 1000233.33 1013239.55
00:15:33.807 ========================================================
00:15:33.807 Total : 256.00 0.12 1004893.64 1000233.33 1042831.32
00:15:33.807
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 920291
00:15:34.066 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (920291) - No such process
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 920291
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '['
tcp == tcp ']' 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.066 rmmod nvme_tcp 00:15:34.066 rmmod nvme_fabrics 00:15:34.066 rmmod nvme_keyring 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 919874 ']' 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 919874 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 919874 ']' 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 919874 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 919874 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 919874' 00:15:34.066 killing process with pid 919874 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 919874 00:15:34.066 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@970 -- # wait 919874 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.326 00:28:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.231 00:28:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:36.231 00:15:36.231 real 0m11.669s 00:15:36.231 user 0m27.441s 00:15:36.231 sys 0m2.586s 00:15:36.231 00:28:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:36.231 00:28:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:36.231 ************************************ 00:15:36.231 END TEST nvmf_delete_subsystem 00:15:36.231 ************************************ 00:15:36.231 00:28:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:36.231 00:28:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:36.231 00:28:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:36.231 00:28:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:36.489 ************************************ 00:15:36.489 START TEST nvmf_ns_masking 00:15:36.490 ************************************ 00:15:36.490 00:28:04 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:36.490 * Looking for test storage... 00:15:36.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.490 00:28:04 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=adcc3164-bbda-442b-ad50-9146ce0da33e 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.490 00:28:04 nvmf_tcp.nvmf_ns_masking 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.866 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.125 
00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:38.125 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:38.125 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:38.125 Found net devices under 0000:08:00.0: cvl_0_0 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.125 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:38.126 Found net devices under 0000:08:00.1: cvl_0_1 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:15:38.126 00:15:38.126 --- 10.0.0.2 ping statistics --- 00:15:38.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.126 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:15:38.126 00:15:38.126 --- 10.0.0.1 ping statistics --- 00:15:38.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.126 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 
00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=922095 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 922095 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 922095 ']' 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.126 00:28:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.126 [2024-07-12 00:28:05.907991] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:38.126 [2024-07-12 00:28:05.908078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.126 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.384 [2024-07-12 00:28:05.973502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.384 [2024-07-12 00:28:06.061849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.384 [2024-07-12 00:28:06.061907] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.384 [2024-07-12 00:28:06.061924] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.384 [2024-07-12 00:28:06.061937] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.384 [2024-07-12 00:28:06.061954] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.384 [2024-07-12 00:28:06.062039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.384 [2024-07-12 00:28:06.062093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.384 [2024-07-12 00:28:06.062146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.384 [2024-07-12 00:28:06.062149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.384 00:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:38.643 [2024-07-12 00:28:06.475120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.901 00:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:38.901 00:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:38.901 00:28:06 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:39.159 Malloc1 00:15:39.159 00:28:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:39.417 Malloc2 00:15:39.417 00:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:39.674 00:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:39.931 00:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.189 [2024-07-12 00:28:07.833480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.189 00:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:40.189 00:28:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I adcc3164-bbda-442b-ad50-9146ce0da33e -a 10.0.0.2 -s 4420 -i 4 00:15:40.189 00:28:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.189 00:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:40.189 00:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.189 00:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:40.189 00:28:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 
-- # sleep 2 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:42.728 [ 0]:0x1 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a18b835bafc74fada5d77015f6f3243e 00:15:42.728 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a18b835bafc74fada5d77015f6f3243e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:42.729 [ 0]:0x1 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a18b835bafc74fada5d77015f6f3243e 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a18b835bafc74fada5d77015f6f3243e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:42.729 [ 1]:0x2 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:42.729 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:15:42.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.987 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:43.245 00:28:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:43.505 00:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:43.505 00:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I adcc3164-bbda-442b-ad50-9146ce0da33e -a 10.0.0.2 -s 4420 -i 4 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:15:43.766 00:28:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:45.674 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( 
nvme_devices == nvme_device_counter )) 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:45.675 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:45.933 00:28:13 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:45.933 [ 0]:0x2 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:45.933 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:46.196 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:46.196 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.196 00:28:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:46.196 [ 0]:0x1 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 
-n 0x1 -o json 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a18b835bafc74fada5d77015f6f3243e 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a18b835bafc74fada5d77015f6f3243e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:46.512 [ 1]:0x2 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.512 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:46.770 [ 0]:0x2 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:46.770 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.028 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.286 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:47.286 00:28:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I adcc3164-bbda-442b-ad50-9146ce0da33e -a 10.0.0.2 -s 4420 -i 4 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:47.286 00:28:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:49.814 [ 0]:0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=a18b835bafc74fada5d77015f6f3243e 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ a18b835bafc74fada5d77015f6f3243e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:49.814 [ 1]:0x2 00:15:49.814 00:28:17 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:49.814 00:28:17 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:49.814 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:49.815 [ 0]:0x2 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host 
nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:49.815 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:50.073 [2024-07-12 00:28:17.881428] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:50.073 request: 00:15:50.073 { 00:15:50.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.073 "nsid": 2, 00:15:50.073 "host": "nqn.2016-06.io.spdk:host1", 00:15:50.073 "method": "nvmf_ns_remove_host", 00:15:50.073 "req_id": 1 00:15:50.073 } 00:15:50.073 Got JSON-RPC error response 00:15:50.073 response: 00:15:50.073 { 00:15:50.073 "code": -32602, 00:15:50.073 "message": "Invalid parameters" 00:15:50.073 } 00:15:50.073 
00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:50.073 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # es=1 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:50.331 [ 0]:0x2 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=958cfb89456a4f3fabec2c801db55b42 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 958cfb89456a4f3fabec2c801db55b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:15:50.331 00:28:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.331 00:28:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 
-- # sync 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.589 rmmod nvme_tcp 00:15:50.589 rmmod nvme_fabrics 00:15:50.589 rmmod nvme_keyring 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 922095 ']' 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 922095 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 922095 ']' 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 922095 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.589 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 922095 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 922095' 00:15:50.849 killing process with pid 922095 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 922095 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 
-- # wait 922095 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.849 00:28:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.392 00:28:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:53.392 00:15:53.392 real 0m16.608s 00:15:53.392 user 0m53.836s 00:15:53.392 sys 0m3.590s 00:15:53.392 00:28:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.392 00:28:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:53.392 ************************************ 00:15:53.392 END TEST nvmf_ns_masking 00:15:53.392 ************************************ 00:15:53.392 00:28:20 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:15:53.392 00:28:20 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.392 00:28:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:53.392 00:28:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.392 00:28:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.392 ************************************ 00:15:53.392 START TEST nvmf_nvme_cli 00:15:53.392 ************************************ 00:15:53.392 
00:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:53.392 * Looking for test storage... 00:15:53.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.392 
00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.392 00:28:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.393 00:28:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:15:53.393 00:28:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.771 00:28:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.771 00:28:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:54.771 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:54.771 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.771 00:28:22 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:54.771 Found net devices under 0000:08:00.0: cvl_0_0 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:54.771 Found net devices under 0000:08:00.1: cvl_0_1 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:15:54.771 00:15:54.771 --- 10.0.0.2 ping statistics --- 00:15:54.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.771 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:15:54.771 00:15:54.771 --- 10.0.0.1 ping statistics --- 00:15:54.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.771 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=924864 00:15:54.771 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 924864 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 924864 ']' 
00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:54.772 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:54.772 [2024-07-12 00:28:22.572408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:54.772 [2024-07-12 00:28:22.572511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.772 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.029 [2024-07-12 00:28:22.641657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.029 [2024-07-12 00:28:22.732657] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.029 [2024-07-12 00:28:22.732715] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.029 [2024-07-12 00:28:22.732730] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.029 [2024-07-12 00:28:22.732744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.029 [2024-07-12 00:28:22.732756] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.029 [2024-07-12 00:28:22.732842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.029 [2024-07-12 00:28:22.732893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.029 [2024-07-12 00:28:22.732942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.029 [2024-07-12 00:28:22.732945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.029 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.029 [2024-07-12 00:28:22.868264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 Malloc0 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 
00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 Malloc1 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 
00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 [2024-07-12 00:28:22.946037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 00:28:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.288 00:28:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:15:55.544 00:15:55.544 Discovery Log Number of Records 2, Generation counter 2 00:15:55.544 =====Discovery Log Entry 0====== 00:15:55.544 trtype: tcp 00:15:55.544 adrfam: ipv4 00:15:55.544 subtype: current discovery subsystem 00:15:55.544 treq: not required 00:15:55.544 portid: 0 00:15:55.544 trsvcid: 4420 00:15:55.544 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:55.544 traddr: 10.0.0.2 00:15:55.544 eflags: explicit discovery connections, duplicate discovery information 00:15:55.544 sectype: none 00:15:55.544 =====Discovery Log Entry 1====== 00:15:55.544 trtype: tcp 00:15:55.544 adrfam: ipv4 00:15:55.544 subtype: nvme subsystem 00:15:55.544 treq: not required 00:15:55.544 portid: 0 00:15:55.544 trsvcid: 4420 00:15:55.544 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:55.544 traddr: 10.0.0.2 00:15:55.544 eflags: none 00:15:55.544 sectype: none 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:55.544 00:28:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:15:56.107 00:28:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 
00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:58.002 /dev/nvme0n1 ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:58.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.002 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.002 rmmod nvme_tcp 00:15:58.002 rmmod nvme_fabrics 00:15:58.002 rmmod nvme_keyring 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 924864 ']' 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 924864 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@946 -- # '[' -z 924864 ']' 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 924864 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 924864 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 924864' 00:15:58.263 killing process with pid 924864 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 924864 00:15:58.263 00:28:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 924864 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.263 00:28:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.801 00:28:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:00.801 00:16:00.801 real 0m7.386s 00:16:00.801 user 0m13.703s 
00:16:00.801 sys 0m1.837s 00:16:00.801 00:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:00.801 00:28:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:00.801 ************************************ 00:16:00.801 END TEST nvmf_nvme_cli 00:16:00.801 ************************************ 00:16:00.801 00:28:28 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:16:00.801 00:28:28 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:00.801 00:28:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:00.801 00:28:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.801 00:28:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.801 ************************************ 00:16:00.801 START TEST nvmf_vfio_user 00:16:00.801 ************************************ 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:00.801 * Looking for test storage... 
00:16:00.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.801 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.802 
00:28:28 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=925497 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 925497' 00:16:00.802 Process pid: 925497 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 925497 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 925497 ']' 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:00.802 [2024-07-12 00:28:28.295167] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:00.802 [2024-07-12 00:28:28.295244] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.802 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.802 [2024-07-12 00:28:28.354210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:00.802 [2024-07-12 00:28:28.441856] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:00.802 [2024-07-12 00:28:28.441908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:00.802 [2024-07-12 00:28:28.441931] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:00.802 [2024-07-12 00:28:28.441946] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:00.802 [2024-07-12 00:28:28.441958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:00.802 [2024-07-12 00:28:28.442038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.802 [2024-07-12 00:28:28.442123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.802 [2024-07-12 00:28:28.442179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:00.802 [2024-07-12 00:28:28.442183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:00.802 00:28:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:01.736 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:02.302 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:02.302 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:02.302 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:02.302 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:02.302 00:28:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:02.560 Malloc1 00:16:02.560 00:28:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:02.818 00:28:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:03.076 00:28:30 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:03.334 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:03.334 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:03.334 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:03.592 Malloc2 00:16:03.592 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:03.850 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:04.459 00:28:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:04.459 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:04.721 [2024-07-12 00:28:32.291159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:04.721 [2024-07-12 00:28:32.291208] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid925917 ] 00:16:04.721 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.721 [2024-07-12 00:28:32.334495] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:04.721 [2024-07-12 00:28:32.337667] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.721 [2024-07-12 00:28:32.337697] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa42ce76000 00:16:04.721 [2024-07-12 00:28:32.338664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.339661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.340666] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.341674] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.342678] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 
0x0, Flags 0x3, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.343685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.344693] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.345700] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.721 [2024-07-12 00:28:32.346709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.721 [2024-07-12 00:28:32.346732] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa42bc2c000 00:16:04.721 [2024-07-12 00:28:32.348191] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.721 [2024-07-12 00:28:32.368742] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:04.721 [2024-07-12 00:28:32.368783] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:04.721 [2024-07-12 00:28:32.371844] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:04.721 [2024-07-12 00:28:32.371910] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:04.721 [2024-07-12 00:28:32.372012] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:04.721 [2024-07-12 00:28:32.372045] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:04.721 [2024-07-12 00:28:32.372058] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:04.721 [2024-07-12 00:28:32.372849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:04.721 [2024-07-12 00:28:32.372881] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:04.721 [2024-07-12 00:28:32.372898] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:04.721 [2024-07-12 00:28:32.373856] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:04.721 [2024-07-12 00:28:32.373883] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:04.721 [2024-07-12 00:28:32.373898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:04.721 [2024-07-12 00:28:32.374864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:04.721 [2024-07-12 00:28:32.374885] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:04.721 [2024-07-12 00:28:32.375868] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:04.721 [2024-07-12 00:28:32.375887] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:04.721 [2024-07-12 00:28:32.375898] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:04.721 [2024-07-12 00:28:32.375911] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:04.721 [2024-07-12 00:28:32.376022] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:04.721 [2024-07-12 00:28:32.376032] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:04.722 [2024-07-12 00:28:32.376042] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:04.722 [2024-07-12 00:28:32.379603] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:04.722 [2024-07-12 00:28:32.379898] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:04.722 [2024-07-12 00:28:32.380903] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:04.722 [2024-07-12 00:28:32.381902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:04.722 [2024-07-12 00:28:32.382004] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:04.722 [2024-07-12 00:28:32.382921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:04.722 [2024-07-12 00:28:32.382941] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:04.722 [2024-07-12 00:28:32.382951] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.382979] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:04.722 [2024-07-12 00:28:32.382999] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383031] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.722 [2024-07-12 00:28:32.383046] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.722 [2024-07-12 00:28:32.383070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383158] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:04.722 [2024-07-12 00:28:32.383170] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:04.722 [2024-07-12 00:28:32.383179] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:04.722 [2024-07-12 00:28:32.383188] 
nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:04.722 [2024-07-12 00:28:32.383197] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:04.722 [2024-07-12 00:28:32.383206] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:04.722 [2024-07-12 00:28:32.383215] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383230] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.722 [2024-07-12 00:28:32.383301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.722 [2024-07-12 00:28:32.383316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.722 [2024-07-12 00:28:32.383330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.722 [2024-07-12 00:28:32.383340] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383357] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383399] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:04.722 [2024-07-12 00:28:32.383409] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383422] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383438] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383548] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383565] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383580] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:04.722 [2024-07-12 00:28:32.383598] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:04.722 [2024-07-12 00:28:32.383610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383646] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:04.722 [2024-07-12 00:28:32.383664] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383680] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383694] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.722 [2024-07-12 00:28:32.383703] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.722 [2024-07-12 00:28:32.383714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383764] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383780] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383794] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.722 [2024-07-12 00:28:32.383803] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.722 [2024-07-12 00:28:32.383815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.383849] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383862] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383877] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383890] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383900] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383915] 
nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:04.722 [2024-07-12 00:28:32.383924] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:04.722 [2024-07-12 00:28:32.383934] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:04.722 [2024-07-12 00:28:32.383967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.383988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.384010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.384024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.384043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.384057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.384076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.384089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:04.722 [2024-07-12 00:28:32.384113] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:04.722 
[2024-07-12 00:28:32.384123] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:04.722 [2024-07-12 00:28:32.384130] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:04.722 [2024-07-12 00:28:32.384137] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:04.722 [2024-07-12 00:28:32.384148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:04.722 [2024-07-12 00:28:32.384161] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:04.722 [2024-07-12 00:28:32.384171] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:04.722 [2024-07-12 00:28:32.384182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.384195] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:04.722 [2024-07-12 00:28:32.384204] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.722 [2024-07-12 00:28:32.384214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.722 [2024-07-12 00:28:32.384228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:04.723 [2024-07-12 00:28:32.384237] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:04.723 [2024-07-12 00:28:32.384248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 
0x2000002f4000 PRP2 0x0 00:16:04.723 [2024-07-12 00:28:32.384261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:04.723 [2024-07-12 00:28:32.384284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:04.723 [2024-07-12 00:28:32.384305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:04.723 [2024-07-12 00:28:32.384324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:04.723 ===================================================== 00:16:04.723 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:04.723 ===================================================== 00:16:04.723 Controller Capabilities/Features 00:16:04.723 ================================ 00:16:04.723 Vendor ID: 4e58 00:16:04.723 Subsystem Vendor ID: 4e58 00:16:04.723 Serial Number: SPDK1 00:16:04.723 Model Number: SPDK bdev Controller 00:16:04.723 Firmware Version: 24.05.1 00:16:04.723 Recommended Arb Burst: 6 00:16:04.723 IEEE OUI Identifier: 8d 6b 50 00:16:04.723 Multi-path I/O 00:16:04.723 May have multiple subsystem ports: Yes 00:16:04.723 May have multiple controllers: Yes 00:16:04.723 Associated with SR-IOV VF: No 00:16:04.723 Max Data Transfer Size: 131072 00:16:04.723 Max Number of Namespaces: 32 00:16:04.723 Max Number of I/O Queues: 127 00:16:04.723 NVMe Specification Version (VS): 1.3 00:16:04.723 NVMe Specification Version (Identify): 1.3 00:16:04.723 Maximum Queue Entries: 256 00:16:04.723 Contiguous Queues Required: Yes 00:16:04.723 Arbitration Mechanisms Supported 00:16:04.723 Weighted Round Robin: Not Supported 00:16:04.723 Vendor Specific: Not Supported 00:16:04.723 Reset Timeout: 15000 ms 00:16:04.723 Doorbell Stride: 4 bytes 00:16:04.723 NVM 
Subsystem Reset: Not Supported 00:16:04.723 Command Sets Supported 00:16:04.723 NVM Command Set: Supported 00:16:04.723 Boot Partition: Not Supported 00:16:04.723 Memory Page Size Minimum: 4096 bytes 00:16:04.723 Memory Page Size Maximum: 4096 bytes 00:16:04.723 Persistent Memory Region: Not Supported 00:16:04.723 Optional Asynchronous Events Supported 00:16:04.723 Namespace Attribute Notices: Supported 00:16:04.723 Firmware Activation Notices: Not Supported 00:16:04.723 ANA Change Notices: Not Supported 00:16:04.723 PLE Aggregate Log Change Notices: Not Supported 00:16:04.723 LBA Status Info Alert Notices: Not Supported 00:16:04.723 EGE Aggregate Log Change Notices: Not Supported 00:16:04.723 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.723 Zone Descriptor Change Notices: Not Supported 00:16:04.723 Discovery Log Change Notices: Not Supported 00:16:04.723 Controller Attributes 00:16:04.723 128-bit Host Identifier: Supported 00:16:04.723 Non-Operational Permissive Mode: Not Supported 00:16:04.723 NVM Sets: Not Supported 00:16:04.723 Read Recovery Levels: Not Supported 00:16:04.723 Endurance Groups: Not Supported 00:16:04.723 Predictable Latency Mode: Not Supported 00:16:04.723 Traffic Based Keep Alive: Not Supported 00:16:04.723 Namespace Granularity: Not Supported 00:16:04.723 SQ Associations: Not Supported 00:16:04.723 UUID List: Not Supported 00:16:04.723 Multi-Domain Subsystem: Not Supported 00:16:04.723 Fixed Capacity Management: Not Supported 00:16:04.723 Variable Capacity Management: Not Supported 00:16:04.723 Delete Endurance Group: Not Supported 00:16:04.723 Delete NVM Set: Not Supported 00:16:04.723 Extended LBA Formats Supported: Not Supported 00:16:04.723 Flexible Data Placement Supported: Not Supported 00:16:04.723 00:16:04.723 Controller Memory Buffer Support 00:16:04.723 ================================ 00:16:04.723 Supported: No 00:16:04.723 00:16:04.723 Persistent Memory Region Support 00:16:04.723 ================================ 
00:16:04.723 Supported: No 00:16:04.723 00:16:04.723 Admin Command Set Attributes 00:16:04.723 ============================ 00:16:04.723 Security Send/Receive: Not Supported 00:16:04.723 Format NVM: Not Supported 00:16:04.723 Firmware Activate/Download: Not Supported 00:16:04.723 Namespace Management: Not Supported 00:16:04.723 Device Self-Test: Not Supported 00:16:04.723 Directives: Not Supported 00:16:04.723 NVMe-MI: Not Supported 00:16:04.723 Virtualization Management: Not Supported 00:16:04.723 Doorbell Buffer Config: Not Supported 00:16:04.723 Get LBA Status Capability: Not Supported 00:16:04.723 Command & Feature Lockdown Capability: Not Supported 00:16:04.723 Abort Command Limit: 4 00:16:04.723 Async Event Request Limit: 4 00:16:04.723 Number of Firmware Slots: N/A 00:16:04.723 Firmware Slot 1 Read-Only: N/A 00:16:04.723 Firmware Activation Without Reset: N/A 00:16:04.723 Multiple Update Detection Support: N/A 00:16:04.723 Firmware Update Granularity: No Information Provided 00:16:04.723 Per-Namespace SMART Log: No 00:16:04.723 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.723 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:04.723 Command Effects Log Page: Supported 00:16:04.723 Get Log Page Extended Data: Supported 00:16:04.723 Telemetry Log Pages: Not Supported 00:16:04.723 Persistent Event Log Pages: Not Supported 00:16:04.723 Supported Log Pages Log Page: May Support 00:16:04.723 Commands Supported & Effects Log Page: Not Supported 00:16:04.723 Feature Identifiers & Effects Log Page: May Support 00:16:04.723 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.723 Data Area 4 for Telemetry Log: Not Supported 00:16:04.723 Error Log Page Entries Supported: 128 00:16:04.723 Keep Alive: Supported 00:16:04.723 Keep Alive Granularity: 10000 ms 00:16:04.723 00:16:04.723 NVM Command Set Attributes 00:16:04.723 ========================== 00:16:04.723 Submission Queue Entry Size 00:16:04.723 Max: 64 00:16:04.723 Min: 64 00:16:04.723 Completion 
Queue Entry Size 00:16:04.723 Max: 16 00:16:04.723 Min: 16 00:16:04.723 Number of Namespaces: 32 00:16:04.723 Compare Command: Supported 00:16:04.723 Write Uncorrectable Command: Not Supported 00:16:04.723 Dataset Management Command: Supported 00:16:04.723 Write Zeroes Command: Supported 00:16:04.723 Set Features Save Field: Not Supported 00:16:04.723 Reservations: Not Supported 00:16:04.723 Timestamp: Not Supported 00:16:04.723 Copy: Supported 00:16:04.723 Volatile Write Cache: Present 00:16:04.723 Atomic Write Unit (Normal): 1 00:16:04.723 Atomic Write Unit (PFail): 1 00:16:04.723 Atomic Compare & Write Unit: 1 00:16:04.723 Fused Compare & Write: Supported 00:16:04.723 Scatter-Gather List 00:16:04.723 SGL Command Set: Supported (Dword aligned) 00:16:04.723 SGL Keyed: Not Supported 00:16:04.723 SGL Bit Bucket Descriptor: Not Supported 00:16:04.723 SGL Metadata Pointer: Not Supported 00:16:04.723 Oversized SGL: Not Supported 00:16:04.723 SGL Metadata Address: Not Supported 00:16:04.723 SGL Offset: Not Supported 00:16:04.723 Transport SGL Data Block: Not Supported 00:16:04.723 Replay Protected Memory Block: Not Supported 00:16:04.723 00:16:04.723 Firmware Slot Information 00:16:04.723 ========================= 00:16:04.723 Active slot: 1 00:16:04.723 Slot 1 Firmware Revision: 24.05.1 00:16:04.723 00:16:04.723 00:16:04.723 Commands Supported and Effects 00:16:04.723 ============================== 00:16:04.723 Admin Commands 00:16:04.723 -------------- 00:16:04.723 Get Log Page (02h): Supported 00:16:04.723 Identify (06h): Supported 00:16:04.723 Abort (08h): Supported 00:16:04.723 Set Features (09h): Supported 00:16:04.723 Get Features (0Ah): Supported 00:16:04.723 Asynchronous Event Request (0Ch): Supported 00:16:04.723 Keep Alive (18h): Supported 00:16:04.723 I/O Commands 00:16:04.723 ------------ 00:16:04.723 Flush (00h): Supported LBA-Change 00:16:04.723 Write (01h): Supported LBA-Change 00:16:04.723 Read (02h): Supported 00:16:04.723 Compare (05h): Supported 
00:16:04.723 Write Zeroes (08h): Supported LBA-Change 00:16:04.723 Dataset Management (09h): Supported LBA-Change 00:16:04.723 Copy (19h): Supported LBA-Change 00:16:04.723 Unknown (79h): Supported LBA-Change 00:16:04.723 Unknown (7Ah): Supported 00:16:04.723 00:16:04.723 Error Log 00:16:04.723 ========= 00:16:04.723 00:16:04.723 Arbitration 00:16:04.723 =========== 00:16:04.723 Arbitration Burst: 1 00:16:04.723 00:16:04.723 Power Management 00:16:04.723 ================ 00:16:04.723 Number of Power States: 1 00:16:04.723 Current Power State: Power State #0 00:16:04.723 Power State #0: 00:16:04.723 Max Power: 0.00 W 00:16:04.723 Non-Operational State: Operational 00:16:04.723 Entry Latency: Not Reported 00:16:04.723 Exit Latency: Not Reported 00:16:04.723 Relative Read Throughput: 0 00:16:04.723 Relative Read Latency: 0 00:16:04.723 Relative Write Throughput: 0 00:16:04.723 Relative Write Latency: 0 00:16:04.723 Idle Power: Not Reported 00:16:04.723 Active Power: Not Reported 00:16:04.723 Non-Operational Permissive Mode: Not Supported 00:16:04.723 00:16:04.723 Health Information 00:16:04.723 ================== 00:16:04.723 Critical Warnings: 00:16:04.723 Available Spare Space: OK 00:16:04.723 Temperature: OK 00:16:04.723 Device Reliability: OK 00:16:04.723 Read Only: No 00:16:04.723 Volatile Memory Backup: OK
[2024-07-12 00:28:32.384462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:04.723
[2024-07-12 00:28:32.384480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:04.723
[2024-07-12 00:28:32.384522] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:04.723
[2024-07-12 00:28:32.384540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.723
[2024-07-12 00:28:32.384553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.723
[2024-07-12 00:28:32.384564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.723
[2024-07-12 00:28:32.384576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.723
[2024-07-12 00:28:32.384935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:04.723
[2024-07-12 00:28:32.384959] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:04.723
[2024-07-12 00:28:32.385934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.723
[2024-07-12 00:28:32.386012] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:04.723
[2024-07-12 00:28:32.386026] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:04.723
[2024-07-12 00:28:32.386941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:04.723
[2024-07-12 00:28:32.386965] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:04.723
[2024-07-12 00:28:32.387044] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:04.723
[2024-07-12 00:28:32.393605] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.723
00:16:04.723 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:04.723 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:04.723 Available Spare: 0% 00:16:04.723 Available Spare Threshold: 0% 00:16:04.723 Life Percentage Used: 0% 00:16:04.723 Data Units Read: 0 00:16:04.723 Data Units Written: 0 00:16:04.723 Host Read Commands: 0 00:16:04.723 Host Write Commands: 0 00:16:04.723 Controller Busy Time: 0 minutes 00:16:04.723 Power Cycles: 0 00:16:04.723 Power On Hours: 0 hours 00:16:04.723 Unsafe Shutdowns: 0 00:16:04.723 Unrecoverable Media Errors: 0 00:16:04.723 Lifetime Error Log Entries: 0 00:16:04.723 Warning Temperature Time: 0 minutes 00:16:04.724 Critical Temperature Time: 0 minutes 00:16:04.724 00:16:04.724 Number of Queues 00:16:04.724 ================ 00:16:04.724 Number of I/O Submission Queues: 127 00:16:04.724 Number of I/O Completion Queues: 127 00:16:04.724 00:16:04.724 Active Namespaces 00:16:04.724 ================= 00:16:04.724 Namespace ID:1 00:16:04.724 Error Recovery Timeout: Unlimited 00:16:04.724 Command Set Identifier: NVM (00h) 00:16:04.724 Deallocate: Supported 00:16:04.724 Deallocated/Unwritten Error: Not Supported 00:16:04.724 Deallocated Read Value: Unknown 00:16:04.724 Deallocate in Write Zeroes: Not Supported 00:16:04.724 Deallocated Guard Field: 0xFFFF 00:16:04.724 Flush: Supported 00:16:04.724 Reservation: Supported 00:16:04.724 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.724 Size (in LBAs): 131072 (0GiB) 00:16:04.724 Capacity (in LBAs): 131072 (0GiB) 00:16:04.724 Utilization (in LBAs): 131072 (0GiB) 00:16:04.724 NGUID: FA6B238BD9E443FD873E8FFB85633BEE 00:16:04.724 UUID: fa6b238b-d9e4-43fd-873e-8ffb85633bee 00:16:04.724 Thin Provisioning: Not Supported 00:16:04.724 Per-NS Atomic Units: Yes 00:16:04.724 Atomic Boundary Size (Normal): 0 00:16:04.724 Atomic Boundary Size (PFail): 0 00:16:04.724 Atomic Boundary Offset: 0 00:16:04.724 Maximum Single Source Range Length: 65535 00:16:04.724 Maximum Copy Length: 65535 
00:16:04.724 Maximum Source Range Count: 1 00:16:04.724 NGUID/EUI64 Never Reused: No 00:16:04.724 Namespace Write Protected: No 00:16:04.724 Number of LBA Formats: 1 00:16:04.724 Current LBA Format: LBA Format #00 00:16:04.724 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.724 00:16:04.724 00:28:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:04.724 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.982 [2024-07-12 00:28:32.616413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:10.244 Initializing NVMe Controllers 00:16:10.244 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:10.244 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:10.244 Initialization complete. Launching workers. 
00:16:10.244 ======================================================== 00:16:10.244 Latency(us) 00:16:10.244 Device Information : IOPS MiB/s Average min max 00:16:10.244 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24172.45 94.42 5295.24 1460.15 10543.09 00:16:10.244 ======================================================== 00:16:10.244 Total : 24172.45 94.42 5295.24 1460.15 10543.09 00:16:10.244 00:16:10.244 [2024-07-12 00:28:37.640166] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:10.244 00:28:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:10.244 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.244 [2024-07-12 00:28:37.864357] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:15.503 Initializing NVMe Controllers 00:16:15.503 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:15.503 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:15.503 Initialization complete. Launching workers. 
00:16:15.503 ======================================================== 00:16:15.503 Latency(us) 00:16:15.503 Device Information : IOPS MiB/s Average min max 00:16:15.503 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16007.54 62.53 7995.46 6951.37 15973.60 00:16:15.503 ======================================================== 00:16:15.503 Total : 16007.54 62.53 7995.46 6951.37 15973.60 00:16:15.503 00:16:15.503 [2024-07-12 00:28:42.897152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:15.503 00:28:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:15.503 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.503 [2024-07-12 00:28:43.111325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:20.768 [2024-07-12 00:28:48.184848] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:20.768 Initializing NVMe Controllers 00:16:20.768 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:20.768 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:20.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:20.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:20.768 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:20.768 Initialization complete. Launching workers. 
00:16:20.768 Starting thread on core 2 00:16:20.768 Starting thread on core 3 00:16:20.768 Starting thread on core 1 00:16:20.768 00:28:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:20.768 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.768 [2024-07-12 00:28:48.469359] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.056 [2024-07-12 00:28:51.544841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.056 Initializing NVMe Controllers 00:16:24.056 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:24.056 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:24.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:24.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:24.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:24.056 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:24.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:24.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:24.056 Initialization complete. Launching workers. 
00:16:24.056 Starting thread on core 1 with urgent priority queue 00:16:24.056 Starting thread on core 2 with urgent priority queue 00:16:24.056 Starting thread on core 3 with urgent priority queue 00:16:24.056 Starting thread on core 0 with urgent priority queue 00:16:24.056 SPDK bdev Controller (SPDK1 ) core 0: 7291.67 IO/s 13.71 secs/100000 ios 00:16:24.056 SPDK bdev Controller (SPDK1 ) core 1: 7271.00 IO/s 13.75 secs/100000 ios 00:16:24.056 SPDK bdev Controller (SPDK1 ) core 2: 7186.33 IO/s 13.92 secs/100000 ios 00:16:24.056 SPDK bdev Controller (SPDK1 ) core 3: 7612.33 IO/s 13.14 secs/100000 ios 00:16:24.056 ======================================================== 00:16:24.056 00:16:24.056 00:28:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:24.056 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.056 [2024-07-12 00:28:51.813821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.056 Initializing NVMe Controllers 00:16:24.056 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:24.056 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:24.056 Namespace ID: 1 size: 0GB 00:16:24.056 Initialization complete. 00:16:24.056 INFO: using host memory buffer for IO 00:16:24.056 Hello world! 
00:16:24.056 [2024-07-12 00:28:51.850563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.314 00:28:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:24.314 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.314 [2024-07-12 00:28:52.115655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.688 Initializing NVMe Controllers 00:16:25.688 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.688 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:25.688 Initialization complete. Launching workers. 00:16:25.688 submit (in ns) avg, min, max = 12571.5, 4420.7, 4017666.7 00:16:25.688 complete (in ns) avg, min, max = 27633.2, 2657.8, 4022835.6 00:16:25.688 00:16:25.688 Submit histogram 00:16:25.688 ================ 00:16:25.688 Range in us Cumulative Count 00:16:25.688 4.409 - 4.433: 0.0085% ( 1) 00:16:25.688 4.456 - 4.480: 0.1110% ( 12) 00:16:25.688 4.480 - 4.504: 1.6480% ( 180) 00:16:25.688 4.504 - 4.527: 4.8587% ( 376) 00:16:25.688 4.527 - 4.551: 9.0599% ( 492) 00:16:25.688 4.551 - 4.575: 13.5343% ( 524) 00:16:25.688 4.575 - 4.599: 16.3607% ( 331) 00:16:25.688 4.599 - 4.622: 17.9489% ( 186) 00:16:25.688 4.622 - 4.646: 18.7089% ( 89) 00:16:25.688 4.646 - 4.670: 19.4774% ( 90) 00:16:25.688 4.670 - 4.693: 21.3731% ( 222) 00:16:25.688 4.693 - 4.717: 25.1302% ( 440) 00:16:25.688 4.717 - 4.741: 32.3713% ( 848) 00:16:25.688 4.741 - 4.764: 38.5620% ( 725) 00:16:25.688 4.764 - 4.788: 42.0545% ( 409) 00:16:25.688 4.788 - 4.812: 43.5659% ( 177) 00:16:25.688 4.812 - 4.836: 44.4283% ( 101) 00:16:25.688 4.836 - 4.859: 45.1456% ( 84) 00:16:25.688 4.859 - 4.883: 46.4350% ( 151) 00:16:25.688 4.883 - 4.907: 47.7841% ( 158) 
00:16:25.688 4.907 - 4.930: 50.1067% ( 272) 00:16:25.688 4.930 - 4.954: 51.4132% ( 153) 00:16:25.688 4.954 - 4.978: 52.8477% ( 168) 00:16:25.688 4.978 - 5.001: 53.7614% ( 107) 00:16:25.688 5.001 - 5.025: 54.3165% ( 65) 00:16:25.688 5.025 - 5.049: 54.5982% ( 33) 00:16:25.688 5.049 - 5.073: 54.7519% ( 18) 00:16:25.688 5.073 - 5.096: 54.8886% ( 16) 00:16:25.688 5.096 - 5.120: 56.0157% ( 132) 00:16:25.688 5.120 - 5.144: 58.3981% ( 279) 00:16:25.688 5.144 - 5.167: 65.0926% ( 784) 00:16:25.688 5.167 - 5.191: 67.1762% ( 244) 00:16:25.688 5.191 - 5.215: 68.6449% ( 172) 00:16:25.688 5.215 - 5.239: 69.6695% ( 120) 00:16:25.688 5.239 - 5.262: 70.3612% ( 81) 00:16:25.688 5.262 - 5.286: 71.0272% ( 78) 00:16:25.688 5.286 - 5.310: 76.1677% ( 602) 00:16:25.688 5.310 - 5.333: 77.7474% ( 185) 00:16:25.688 5.333 - 5.357: 79.1393% ( 163) 00:16:25.688 5.357 - 5.381: 80.0786% ( 110) 00:16:25.688 5.381 - 5.404: 81.4448% ( 160) 00:16:25.688 5.404 - 5.428: 81.8632% ( 49) 00:16:25.688 5.428 - 5.452: 82.1706% ( 36) 00:16:25.688 5.452 - 5.476: 82.2902% ( 14) 00:16:25.688 5.476 - 5.499: 82.3585% ( 8) 00:16:25.688 5.499 - 5.523: 83.0330% ( 79) 00:16:25.688 5.523 - 5.547: 91.4952% ( 991) 00:16:25.688 5.547 - 5.570: 93.2115% ( 201) 00:16:25.688 5.570 - 5.594: 95.0303% ( 213) 00:16:25.688 5.594 - 5.618: 95.6451% ( 72) 00:16:25.688 5.618 - 5.641: 95.9611% ( 37) 00:16:25.688 5.641 - 5.665: 96.1404% ( 21) 00:16:25.688 5.665 - 5.689: 96.2343% ( 11) 00:16:25.688 5.689 - 5.713: 96.2855% ( 6) 00:16:25.688 5.713 - 5.736: 96.3453% ( 7) 00:16:25.689 5.736 - 5.760: 96.4051% ( 7) 00:16:25.689 5.760 - 5.784: 96.5759% ( 20) 00:16:25.689 5.784 - 5.807: 96.6783% ( 12) 00:16:25.689 5.807 - 5.831: 96.7296% ( 6) 00:16:25.689 5.831 - 5.855: 96.7979% ( 8) 00:16:25.689 5.855 - 5.879: 96.8833% ( 10) 00:16:25.689 5.879 - 5.902: 96.9430% ( 7) 00:16:25.689 5.902 - 5.926: 96.9601% ( 2) 00:16:25.689 5.926 - 5.950: 97.0199% ( 7) 00:16:25.689 5.950 - 5.973: 97.0370% ( 2) 00:16:25.689 5.973 - 5.997: 97.1309% ( 11) 00:16:25.689 
5.997 - 6.021: 97.1565% ( 3) 00:16:25.689 6.021 - 6.044: 97.1907% ( 4) 00:16:25.689 6.044 - 6.068: 97.4810% ( 34) 00:16:25.689 6.068 - 6.116: 97.6689% ( 22) 00:16:25.689 6.116 - 6.163: 97.7457% ( 9) 00:16:25.689 6.163 - 6.210: 97.9336% ( 22) 00:16:25.689 6.210 - 6.258: 98.5740% ( 75) 00:16:25.689 6.258 - 6.305: 98.6850% ( 13) 00:16:25.689 6.305 - 6.353: 98.7277% ( 5) 00:16:25.689 6.353 - 6.400: 98.7618% ( 4) 00:16:25.689 6.400 - 6.447: 98.8387% ( 9) 00:16:25.689 6.447 - 6.495: 98.9155% ( 9) 00:16:25.689 6.495 - 6.542: 98.9326% ( 2) 00:16:25.689 6.590 - 6.637: 98.9839% ( 6) 00:16:25.689 6.684 - 6.732: 99.0009% ( 2) 00:16:25.689 6.779 - 6.827: 99.0095% ( 1) 00:16:25.689 6.827 - 6.874: 99.0351% ( 3) 00:16:25.689 6.874 - 6.921: 99.0436% ( 1) 00:16:25.689 6.921 - 6.969: 99.0522% ( 1) 00:16:25.689 6.969 - 7.016: 99.0693% ( 2) 00:16:25.689 7.016 - 7.064: 99.0778% ( 1) 00:16:25.689 7.585 - 7.633: 99.0863% ( 1) 00:16:25.689 7.633 - 7.680: 99.1034% ( 2) 00:16:25.689 7.727 - 7.775: 99.1205% ( 2) 00:16:25.689 7.775 - 7.822: 99.1376% ( 2) 00:16:25.689 7.870 - 7.917: 99.1461% ( 1) 00:16:25.689 7.964 - 8.012: 99.1546% ( 1) 00:16:25.689 8.107 - 8.154: 99.1632% ( 1) 00:16:25.689 8.201 - 8.249: 99.1803% ( 2) 00:16:25.689 8.344 - 8.391: 99.1888% ( 1) 00:16:25.689 8.391 - 8.439: 99.1973% ( 1) 00:16:25.689 8.439 - 8.486: 99.2059% ( 1) 00:16:25.689 8.533 - 8.581: 99.2230% ( 2) 00:16:25.689 8.581 - 8.628: 99.2315% ( 1) 00:16:25.689 8.723 - 8.770: 99.2571% ( 3) 00:16:25.689 8.770 - 8.818: 99.2742% ( 2) 00:16:25.689 8.865 - 8.913: 99.2913% ( 2) 00:16:25.689 8.913 - 8.960: 99.2998% ( 1) 00:16:25.689 9.007 - 9.055: 99.3083% ( 1) 00:16:25.689 9.055 - 9.102: 99.3169% ( 1) 00:16:25.689 9.102 - 9.150: 99.3254% ( 1) 00:16:25.689 9.150 - 9.197: 99.3510% ( 3) 00:16:25.689 9.197 - 9.244: 99.3681% ( 2) 00:16:25.689 9.244 - 9.292: 99.3852% ( 2) 00:16:25.689 9.387 - 9.434: 99.3937% ( 1) 00:16:25.689 9.481 - 9.529: 99.4023% ( 1) 00:16:25.689 9.529 - 9.576: 99.4108% ( 1) 00:16:25.689 9.624 - 9.671: 
99.4279% ( 2) 00:16:25.689 9.671 - 9.719: 99.4364% ( 1) 00:16:25.689 9.719 - 9.766: 99.4706% ( 4) 00:16:25.689 9.813 - 9.861: 99.4791% ( 1) 00:16:25.689 9.861 - 9.908: 99.4877% ( 1) 00:16:25.689 9.908 - 9.956: 99.4962% ( 1) 00:16:25.689 10.003 - 10.050: 99.5047% ( 1) 00:16:25.689 10.050 - 10.098: 99.5133% ( 1) 00:16:25.689 10.145 - 10.193: 99.5218% ( 1) 00:16:25.689 10.335 - 10.382: 99.5304% ( 1) 00:16:25.689 10.382 - 10.430: 99.5389% ( 1) 00:16:25.689 10.572 - 10.619: 99.5474% ( 1) 00:16:25.689 11.141 - 11.188: 99.5560% ( 1) 00:16:25.689 11.188 - 11.236: 99.5645% ( 1) 00:16:25.689 11.567 - 11.615: 99.5731% ( 1) 00:16:25.689 11.710 - 11.757: 99.5816% ( 1) 00:16:25.689 12.089 - 12.136: 99.5901% ( 1) 00:16:25.689 12.136 - 12.231: 99.5987% ( 1) 00:16:25.689 12.231 - 12.326: 99.6157% ( 2) 00:16:25.689 12.421 - 12.516: 99.6243% ( 1) 00:16:25.689 12.516 - 12.610: 99.6328% ( 1) 00:16:25.689 12.610 - 12.705: 99.6414% ( 1) 00:16:25.689 13.369 - 13.464: 99.6584% ( 2) 00:16:25.689 13.464 - 13.559: 99.6670% ( 1) 00:16:25.689 13.653 - 13.748: 99.7097% ( 5) 00:16:25.689 13.748 - 13.843: 99.7353% ( 3) 00:16:25.689 13.843 - 13.938: 99.7524% ( 2) 00:16:25.689 14.033 - 14.127: 99.7609% ( 1) 00:16:25.689 14.127 - 14.222: 99.7694% ( 1) 00:16:25.689 14.317 - 14.412: 99.7780% ( 1) 00:16:25.689 14.791 - 14.886: 99.7865% ( 1) 00:16:25.689 14.981 - 15.076: 99.8036% ( 2) 00:16:25.689 17.446 - 17.541: 99.8121% ( 1) 00:16:25.689 3980.705 - 4004.978: 99.9317% ( 14) 00:16:25.689 4004.978 - 4029.250: 100.0000% ( 8) 00:16:25.689 00:16:25.689 Complete histogram 00:16:25.689 ================== 00:16:25.689 Range in us Cumulative Count 00:16:25.689 2.655 - 2.667: 0.8197% ( 96) 00:16:25.689 2.667 - 2.679: 24.4898% ( 2772) 00:16:25.689 2.679 - 2.690: 63.1970% ( 4533) 00:16:25.689 2.690 - 2.702: 72.4106% ( 1079) 00:16:25.689 2.702 - 2.714: 78.7038% ( 737) 00:16:25.689 2.714 - 2.726: 87.2001% ( 995) 00:16:25.689 2.726 - 2.738: 92.1271% ( 577) 00:16:25.689 2.738 - 2.750: 95.8842% ( 440) 00:16:25.689 
2.750 - 2.761: 97.0967% ( 142) 00:16:25.689 2.761 - 2.773: 97.6091% ( 60) 00:16:25.689 2.773 - 2.785: 98.0446% ( 51) 00:16:25.689 2.785 - 2.797: 98.2751% ( 27) 00:16:25.689 2.797 - 2.809: 98.3861% ( 13) 00:16:25.689 2.809 - 2.821: 98.4801% ( 11) 00:16:25.689 2.821 - 2.833: 98.5142% ( 4) 00:16:25.689 2.833 - 2.844: 98.5484% ( 4) 00:16:25.689 2.844 - 2.856: 98.5569% ( 1) 00:16:25.689 [2024-07-12 00:28:53.143033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:25.689 2.856 - 2.868: 98.5996% ( 5) 00:16:25.689 2.892 - 2.904: 98.6252% ( 3) 00:16:25.689 2.904 - 2.916: 98.6338% ( 1) 00:16:25.689 2.916 - 2.927: 98.6679% ( 4) 00:16:25.689 2.927 - 2.939: 98.6850% ( 2) 00:16:25.689 2.939 - 2.951: 98.7021% ( 2) 00:16:25.689 2.951 - 2.963: 98.7192% ( 2) 00:16:25.689 2.963 - 2.975: 98.7362% ( 2) 00:16:25.689 2.975 - 2.987: 98.7448% ( 1) 00:16:25.689 2.987 - 2.999: 98.7533% ( 1) 00:16:25.689 3.010 - 3.022: 98.7618% ( 1) 00:16:25.689 3.022 - 3.034: 98.7704% ( 1) 00:16:25.689 3.058 - 3.081: 98.7960% ( 3) 00:16:25.689 3.153 - 3.176: 98.8045% ( 1) 00:16:25.689 3.200 - 3.224: 98.8131% ( 1) 00:16:25.689 3.247 - 3.271: 98.8302% ( 2) 00:16:25.689 3.271 - 3.295: 98.8472% ( 2) 00:16:25.689 3.295 - 3.319: 98.8558% ( 1) 00:16:25.689 3.319 - 3.342: 98.8643% ( 1) 00:16:25.689 3.342 - 3.366: 98.8729% ( 1) 00:16:25.689 3.366 - 3.390: 98.8985% ( 3) 00:16:25.689 3.390 - 3.413: 98.9070% ( 1) 00:16:25.689 3.413 - 3.437: 98.9241% ( 2) 00:16:25.689 3.437 - 3.461: 98.9497% ( 3) 00:16:25.689 3.461 - 3.484: 98.9668% ( 2) 00:16:25.689 3.484 - 3.508: 99.0095% ( 5) 00:16:25.689 3.508 - 3.532: 99.0180% ( 1) 00:16:25.689 3.532 - 3.556: 99.0351% ( 2) 00:16:25.689 3.556 - 3.579: 99.0607% ( 3) 00:16:25.689 3.627 - 3.650: 99.0693% ( 1) 00:16:25.689 3.650 - 3.674: 99.0778% ( 1) 00:16:25.689 3.674 - 3.698: 99.0949% ( 2) 00:16:25.689 3.698 - 3.721: 99.1034% ( 1) 00:16:25.689 3.721 - 3.745: 99.1119% ( 1) 00:16:25.689 4.243 - 4.267: 99.1205% ( 1) 00:16:25.689 
4.267 - 4.290: 99.1290% ( 1) 00:16:25.689 5.594 - 5.618: 99.1376% ( 1) 00:16:25.689 5.784 - 5.807: 99.1461% ( 1) 00:16:25.689 5.807 - 5.831: 99.1546% ( 1) 00:16:25.689 5.855 - 5.879: 99.1717% ( 2) 00:16:25.689 5.950 - 5.973: 99.1888% ( 2) 00:16:25.689 5.973 - 5.997: 99.1973% ( 1) 00:16:25.689 5.997 - 6.021: 99.2059% ( 1) 00:16:25.689 6.116 - 6.163: 99.2144% ( 1) 00:16:25.689 6.258 - 6.305: 99.2230% ( 1) 00:16:25.690 6.353 - 6.400: 99.2400% ( 2) 00:16:25.690 6.400 - 6.447: 99.2571% ( 2) 00:16:25.690 7.016 - 7.064: 99.2656% ( 1) 00:16:25.690 7.253 - 7.301: 99.2742% ( 1) 00:16:25.690 7.301 - 7.348: 99.2827% ( 1) 00:16:25.690 7.538 - 7.585: 99.2913% ( 1) 00:16:25.690 7.680 - 7.727: 99.2998% ( 1) 00:16:25.690 8.628 - 8.676: 99.3083% ( 1) 00:16:25.690 9.813 - 9.861: 99.3169% ( 1) 00:16:25.690 10.145 - 10.193: 99.3254% ( 1) 00:16:25.690 10.335 - 10.382: 99.3340% ( 1) 00:16:25.690 10.904 - 10.951: 99.3425% ( 1) 00:16:25.690 13.369 - 13.464: 99.3510% ( 1) 00:16:25.690 13.843 - 13.938: 99.3596% ( 1) 00:16:25.690 16.403 - 16.498: 99.3681% ( 1) 00:16:25.690 17.161 - 17.256: 99.3767% ( 1) 00:16:25.690 3810.797 - 3835.070: 99.3852% ( 1) 00:16:25.690 3980.705 - 4004.978: 99.7694% ( 45) 00:16:25.690 4004.978 - 4029.250: 100.0000% ( 27) 00:16:25.690 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.690 [ 00:16:25.690 { 00:16:25.690 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:25.690 "subtype": "Discovery", 00:16:25.690 "listen_addresses": [], 00:16:25.690 "allow_any_host": true, 00:16:25.690 "hosts": [] 00:16:25.690 }, 00:16:25.690 { 00:16:25.690 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.690 "subtype": "NVMe", 00:16:25.690 "listen_addresses": [ 00:16:25.690 { 00:16:25.690 "trtype": "VFIOUSER", 00:16:25.690 "adrfam": "IPv4", 00:16:25.690 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.690 "trsvcid": "0" 00:16:25.690 } 00:16:25.690 ], 00:16:25.690 "allow_any_host": true, 00:16:25.690 "hosts": [], 00:16:25.690 "serial_number": "SPDK1", 00:16:25.690 "model_number": "SPDK bdev Controller", 00:16:25.690 "max_namespaces": 32, 00:16:25.690 "min_cntlid": 1, 00:16:25.690 "max_cntlid": 65519, 00:16:25.690 "namespaces": [ 00:16:25.690 { 00:16:25.690 "nsid": 1, 00:16:25.690 "bdev_name": "Malloc1", 00:16:25.690 "name": "Malloc1", 00:16:25.690 "nguid": "FA6B238BD9E443FD873E8FFB85633BEE", 00:16:25.690 "uuid": "fa6b238b-d9e4-43fd-873e-8ffb85633bee" 00:16:25.690 } 00:16:25.690 ] 00:16:25.690 }, 00:16:25.690 { 00:16:25.690 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.690 "subtype": "NVMe", 00:16:25.690 "listen_addresses": [ 00:16:25.690 { 00:16:25.690 "trtype": "VFIOUSER", 00:16:25.690 "adrfam": "IPv4", 00:16:25.690 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.690 "trsvcid": "0" 00:16:25.690 } 00:16:25.690 ], 00:16:25.690 "allow_any_host": true, 00:16:25.690 "hosts": [], 00:16:25.690 "serial_number": "SPDK2", 00:16:25.690 "model_number": "SPDK bdev Controller", 00:16:25.690 "max_namespaces": 32, 00:16:25.690 "min_cntlid": 1, 00:16:25.690 "max_cntlid": 65519, 00:16:25.690 "namespaces": [ 00:16:25.690 { 00:16:25.690 "nsid": 1, 00:16:25.690 "bdev_name": "Malloc2", 00:16:25.690 "name": "Malloc2", 00:16:25.690 "nguid": "9CC61FD0FB084E0EAF0B22B2F0DF9C9A", 00:16:25.690 "uuid": "9cc61fd0-fb08-4e0e-af0b-22b2f0df9c9a" 00:16:25.690 } 00:16:25.690 ] 00:16:25.690 } 00:16:25.690 ] 
00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=927828 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:16:25.690 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:25.948 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:25.948 [2024-07-12 00:28:53.655145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:25.948 00:28:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:26.208 Malloc3 00:16:26.208 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:26.774 [2024-07-12 00:28:54.312097] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:26.774 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.774 Asynchronous Event Request test 00:16:26.774 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.774 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:26.774 Registering asynchronous event callbacks... 00:16:26.774 Starting namespace attribute notice tests for all controllers... 00:16:26.774 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:26.774 aer_cb - Changed Namespace 00:16:26.774 Cleaning up... 
00:16:26.774 [ 00:16:26.775 { 00:16:26.775 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.775 "subtype": "Discovery", 00:16:26.775 "listen_addresses": [], 00:16:26.775 "allow_any_host": true, 00:16:26.775 "hosts": [] 00:16:26.775 }, 00:16:26.775 { 00:16:26.775 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.775 "subtype": "NVMe", 00:16:26.775 "listen_addresses": [ 00:16:26.775 { 00:16:26.775 "trtype": "VFIOUSER", 00:16:26.775 "adrfam": "IPv4", 00:16:26.775 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.775 "trsvcid": "0" 00:16:26.775 } 00:16:26.775 ], 00:16:26.775 "allow_any_host": true, 00:16:26.775 "hosts": [], 00:16:26.775 "serial_number": "SPDK1", 00:16:26.775 "model_number": "SPDK bdev Controller", 00:16:26.775 "max_namespaces": 32, 00:16:26.775 "min_cntlid": 1, 00:16:26.775 "max_cntlid": 65519, 00:16:26.775 "namespaces": [ 00:16:26.775 { 00:16:26.775 "nsid": 1, 00:16:26.775 "bdev_name": "Malloc1", 00:16:26.775 "name": "Malloc1", 00:16:26.775 "nguid": "FA6B238BD9E443FD873E8FFB85633BEE", 00:16:26.775 "uuid": "fa6b238b-d9e4-43fd-873e-8ffb85633bee" 00:16:26.775 }, 00:16:26.775 { 00:16:26.775 "nsid": 2, 00:16:26.775 "bdev_name": "Malloc3", 00:16:26.775 "name": "Malloc3", 00:16:26.775 "nguid": "305FA6FA76C740FA9DF1C95507B7D9BA", 00:16:26.775 "uuid": "305fa6fa-76c7-40fa-9df1-c95507b7d9ba" 00:16:26.775 } 00:16:26.775 ] 00:16:26.775 }, 00:16:26.775 { 00:16:26.775 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.775 "subtype": "NVMe", 00:16:26.775 "listen_addresses": [ 00:16:26.775 { 00:16:26.775 "trtype": "VFIOUSER", 00:16:26.775 "adrfam": "IPv4", 00:16:26.775 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.775 "trsvcid": "0" 00:16:26.775 } 00:16:26.775 ], 00:16:26.775 "allow_any_host": true, 00:16:26.775 "hosts": [], 00:16:26.775 "serial_number": "SPDK2", 00:16:26.775 "model_number": "SPDK bdev Controller", 00:16:26.775 "max_namespaces": 32, 00:16:26.775 "min_cntlid": 1, 00:16:26.775 "max_cntlid": 65519, 00:16:26.775 "namespaces": [ 
00:16:26.775 { 00:16:26.775 "nsid": 1, 00:16:26.775 "bdev_name": "Malloc2", 00:16:26.775 "name": "Malloc2", 00:16:26.775 "nguid": "9CC61FD0FB084E0EAF0B22B2F0DF9C9A", 00:16:26.775 "uuid": "9cc61fd0-fb08-4e0e-af0b-22b2f0df9c9a" 00:16:26.775 } 00:16:26.775 ] 00:16:26.775 } 00:16:26.775 ] 00:16:27.035 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 927828 00:16:27.035 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:27.035 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:27.035 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:27.035 00:28:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:27.035 [2024-07-12 00:28:54.643919] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:16:27.035 [2024-07-12 00:28:54.643969] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid927935 ] 00:16:27.035 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.035 [2024-07-12 00:28:54.687410] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:27.035 [2024-07-12 00:28:54.689801] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.035 [2024-07-12 00:28:54.689837] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f49bf898000 00:16:27.035 [2024-07-12 00:28:54.690796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.691809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.692812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.693820] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.694831] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.695838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.696841] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.697850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:27.035 [2024-07-12 00:28:54.698857] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:27.035 [2024-07-12 00:28:54.698881] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f49be64e000 00:16:27.035 [2024-07-12 00:28:54.700365] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.035 [2024-07-12 00:28:54.720360] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:27.035 [2024-07-12 00:28:54.720418] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:27.035 [2024-07-12 00:28:54.722524] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:27.035 [2024-07-12 00:28:54.722599] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:27.035 [2024-07-12 00:28:54.722701] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:27.035 [2024-07-12 00:28:54.722733] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:27.035 [2024-07-12 00:28:54.722744] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:27.035 [2024-07-12 00:28:54.723524] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:27.035 [2024-07-12 00:28:54.723554] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:27.035 [2024-07-12 00:28:54.723569] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:27.035 [2024-07-12 00:28:54.724528] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:27.035 [2024-07-12 00:28:54.724551] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:27.035 [2024-07-12 00:28:54.724567] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:27.035 [2024-07-12 00:28:54.725534] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:27.035 [2024-07-12 00:28:54.725557] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:27.035 [2024-07-12 00:28:54.726563] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:27.035 [2024-07-12 00:28:54.726591] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:27.035 [2024-07-12 00:28:54.726603] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:27.035 [2024-07-12 00:28:54.726617] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:27.035 [2024-07-12 00:28:54.726734] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:27.035 [2024-07-12 00:28:54.726745] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:27.035 [2024-07-12 00:28:54.726755] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:27.035 [2024-07-12 00:28:54.727554] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:27.035 [2024-07-12 00:28:54.728566] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:27.036 [2024-07-12 00:28:54.729578] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:27.036 [2024-07-12 00:28:54.730563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.036 [2024-07-12 00:28:54.730641] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:27.036 [2024-07-12 00:28:54.731573] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:27.036 [2024-07-12 00:28:54.731598] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:27.036 [2024-07-12 00:28:54.731610] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.731638] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:27.036 [2024-07-12 00:28:54.731658] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.731690] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.036 [2024-07-12 00:28:54.731701] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.036 [2024-07-12 00:28:54.731724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.739603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.739635] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:27.036 [2024-07-12 00:28:54.739646] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:27.036 [2024-07-12 00:28:54.739656] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:27.036 [2024-07-12 00:28:54.739665] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:27.036 [2024-07-12 00:28:54.739674] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:27.036 [2024-07-12 00:28:54.739683] 
nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:27.036 [2024-07-12 00:28:54.739692] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.739707] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.739729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.747606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.747634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.036 [2024-07-12 00:28:54.747649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.036 [2024-07-12 00:28:54.747664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.036 [2024-07-12 00:28:54.747678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.036 [2024-07-12 00:28:54.747688] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.747706] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.747723] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.755599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.755619] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:27.036 [2024-07-12 00:28:54.755630] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.755643] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.755660] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.755677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.763598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.763705] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.763724] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.763741] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:27.036 [2024-07-12 00:28:54.763750] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:27.036 [2024-07-12 00:28:54.763762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.771602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.771627] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:27.036 [2024-07-12 00:28:54.771646] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.771663] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.771687] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.036 [2024-07-12 00:28:54.771697] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.036 [2024-07-12 00:28:54.771708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.779599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.779634] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.779661] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:16:27.036 [2024-07-12 00:28:54.779676] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:27.036 [2024-07-12 00:28:54.779685] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.036 [2024-07-12 00:28:54.779697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.787598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.787623] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.787640] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.787656] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.787670] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.787680] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:27.036 [2024-07-12 00:28:54.787692] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:27.036 [2024-07-12 00:28:54.787701] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:27.036 [2024-07-12 
00:28:54.787711] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:27.036 [2024-07-12 00:28:54.787745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.795599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.795629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.803599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.803627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.811599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.811628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:27.036 [2024-07-12 00:28:54.819599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:27.036 [2024-07-12 00:28:54.819629] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:27.036 [2024-07-12 00:28:54.819641] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:27.036 [2024-07-12 00:28:54.819648] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:27.036 [2024-07-12 00:28:54.819655] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 
00:16:27.036 [2024-07-12 00:28:54.819667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:27.037 [2024-07-12 00:28:54.819680] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:27.037 [2024-07-12 00:28:54.819689] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:27.037 [2024-07-12 00:28:54.819700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:27.037 [2024-07-12 00:28:54.819712] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:27.037 [2024-07-12 00:28:54.819722] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:27.037 [2024-07-12 00:28:54.819732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:27.037 [2024-07-12 00:28:54.819746] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:27.037 [2024-07-12 00:28:54.819755] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:27.037 [2024-07-12 00:28:54.819766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:27.037 [2024-07-12 00:28:54.827599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:27.037 [2024-07-12 00:28:54.827630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:27.037 [2024-07-12 
00:28:54.827656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:27.037 [2024-07-12 00:28:54.827673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:27.037 ===================================================== 00:16:27.037 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.037 ===================================================== 00:16:27.037 Controller Capabilities/Features 00:16:27.037 ================================ 00:16:27.037 Vendor ID: 4e58 00:16:27.037 Subsystem Vendor ID: 4e58 00:16:27.037 Serial Number: SPDK2 00:16:27.037 Model Number: SPDK bdev Controller 00:16:27.037 Firmware Version: 24.05.1 00:16:27.037 Recommended Arb Burst: 6 00:16:27.037 IEEE OUI Identifier: 8d 6b 50 00:16:27.037 Multi-path I/O 00:16:27.037 May have multiple subsystem ports: Yes 00:16:27.037 May have multiple controllers: Yes 00:16:27.037 Associated with SR-IOV VF: No 00:16:27.037 Max Data Transfer Size: 131072 00:16:27.037 Max Number of Namespaces: 32 00:16:27.037 Max Number of I/O Queues: 127 00:16:27.037 NVMe Specification Version (VS): 1.3 00:16:27.037 NVMe Specification Version (Identify): 1.3 00:16:27.037 Maximum Queue Entries: 256 00:16:27.037 Contiguous Queues Required: Yes 00:16:27.037 Arbitration Mechanisms Supported 00:16:27.037 Weighted Round Robin: Not Supported 00:16:27.037 Vendor Specific: Not Supported 00:16:27.037 Reset Timeout: 15000 ms 00:16:27.037 Doorbell Stride: 4 bytes 00:16:27.037 NVM Subsystem Reset: Not Supported 00:16:27.037 Command Sets Supported 00:16:27.037 NVM Command Set: Supported 00:16:27.037 Boot Partition: Not Supported 00:16:27.037 Memory Page Size Minimum: 4096 bytes 00:16:27.037 Memory Page Size Maximum: 4096 bytes 00:16:27.037 Persistent Memory Region: Not Supported 00:16:27.037 Optional Asynchronous Events Supported 00:16:27.037 
Namespace Attribute Notices: Supported 00:16:27.037 Firmware Activation Notices: Not Supported 00:16:27.037 ANA Change Notices: Not Supported 00:16:27.037 PLE Aggregate Log Change Notices: Not Supported 00:16:27.037 LBA Status Info Alert Notices: Not Supported 00:16:27.037 EGE Aggregate Log Change Notices: Not Supported 00:16:27.037 Normal NVM Subsystem Shutdown event: Not Supported 00:16:27.037 Zone Descriptor Change Notices: Not Supported 00:16:27.037 Discovery Log Change Notices: Not Supported 00:16:27.037 Controller Attributes 00:16:27.037 128-bit Host Identifier: Supported 00:16:27.037 Non-Operational Permissive Mode: Not Supported 00:16:27.037 NVM Sets: Not Supported 00:16:27.037 Read Recovery Levels: Not Supported 00:16:27.037 Endurance Groups: Not Supported 00:16:27.037 Predictable Latency Mode: Not Supported 00:16:27.037 Traffic Based Keep ALive: Not Supported 00:16:27.037 Namespace Granularity: Not Supported 00:16:27.037 SQ Associations: Not Supported 00:16:27.037 UUID List: Not Supported 00:16:27.037 Multi-Domain Subsystem: Not Supported 00:16:27.037 Fixed Capacity Management: Not Supported 00:16:27.037 Variable Capacity Management: Not Supported 00:16:27.037 Delete Endurance Group: Not Supported 00:16:27.037 Delete NVM Set: Not Supported 00:16:27.037 Extended LBA Formats Supported: Not Supported 00:16:27.037 Flexible Data Placement Supported: Not Supported 00:16:27.037 00:16:27.037 Controller Memory Buffer Support 00:16:27.037 ================================ 00:16:27.037 Supported: No 00:16:27.037 00:16:27.037 Persistent Memory Region Support 00:16:27.037 ================================ 00:16:27.037 Supported: No 00:16:27.037 00:16:27.037 Admin Command Set Attributes 00:16:27.037 ============================ 00:16:27.037 Security Send/Receive: Not Supported 00:16:27.037 Format NVM: Not Supported 00:16:27.037 Firmware Activate/Download: Not Supported 00:16:27.037 Namespace Management: Not Supported 00:16:27.037 Device Self-Test: Not Supported 
00:16:27.037 Directives: Not Supported 00:16:27.037 NVMe-MI: Not Supported 00:16:27.037 Virtualization Management: Not Supported 00:16:27.037 Doorbell Buffer Config: Not Supported 00:16:27.037 Get LBA Status Capability: Not Supported 00:16:27.037 Command & Feature Lockdown Capability: Not Supported 00:16:27.037 Abort Command Limit: 4 00:16:27.037 Async Event Request Limit: 4 00:16:27.037 Number of Firmware Slots: N/A 00:16:27.037 Firmware Slot 1 Read-Only: N/A 00:16:27.037 Firmware Activation Without Reset: N/A 00:16:27.037 Multiple Update Detection Support: N/A 00:16:27.037 Firmware Update Granularity: No Information Provided 00:16:27.037 Per-Namespace SMART Log: No 00:16:27.037 Asymmetric Namespace Access Log Page: Not Supported 00:16:27.037 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:27.037 Command Effects Log Page: Supported 00:16:27.037 Get Log Page Extended Data: Supported 00:16:27.037 Telemetry Log Pages: Not Supported 00:16:27.037 Persistent Event Log Pages: Not Supported 00:16:27.037 Supported Log Pages Log Page: May Support 00:16:27.037 Commands Supported & Effects Log Page: Not Supported 00:16:27.037 Feature Identifiers & Effects Log Page:May Support 00:16:27.037 NVMe-MI Commands & Effects Log Page: May Support 00:16:27.037 Data Area 4 for Telemetry Log: Not Supported 00:16:27.037 Error Log Page Entries Supported: 128 00:16:27.037 Keep Alive: Supported 00:16:27.037 Keep Alive Granularity: 10000 ms 00:16:27.037 00:16:27.037 NVM Command Set Attributes 00:16:27.037 ========================== 00:16:27.037 Submission Queue Entry Size 00:16:27.037 Max: 64 00:16:27.037 Min: 64 00:16:27.037 Completion Queue Entry Size 00:16:27.037 Max: 16 00:16:27.037 Min: 16 00:16:27.037 Number of Namespaces: 32 00:16:27.037 Compare Command: Supported 00:16:27.037 Write Uncorrectable Command: Not Supported 00:16:27.037 Dataset Management Command: Supported 00:16:27.037 Write Zeroes Command: Supported 00:16:27.037 Set Features Save Field: Not Supported 00:16:27.037 
Reservations: Not Supported 00:16:27.037 Timestamp: Not Supported 00:16:27.037 Copy: Supported 00:16:27.037 Volatile Write Cache: Present 00:16:27.037 Atomic Write Unit (Normal): 1 00:16:27.037 Atomic Write Unit (PFail): 1 00:16:27.037 Atomic Compare & Write Unit: 1 00:16:27.037 Fused Compare & Write: Supported 00:16:27.037 Scatter-Gather List 00:16:27.037 SGL Command Set: Supported (Dword aligned) 00:16:27.037 SGL Keyed: Not Supported 00:16:27.037 SGL Bit Bucket Descriptor: Not Supported 00:16:27.037 SGL Metadata Pointer: Not Supported 00:16:27.037 Oversized SGL: Not Supported 00:16:27.038 SGL Metadata Address: Not Supported 00:16:27.038 SGL Offset: Not Supported 00:16:27.038 Transport SGL Data Block: Not Supported 00:16:27.038 Replay Protected Memory Block: Not Supported 00:16:27.038 00:16:27.038 Firmware Slot Information 00:16:27.038 ========================= 00:16:27.038 Active slot: 1 00:16:27.038 Slot 1 Firmware Revision: 24.05.1 00:16:27.038 00:16:27.038 00:16:27.038 Commands Supported and Effects 00:16:27.038 ============================== 00:16:27.038 Admin Commands 00:16:27.038 -------------- 00:16:27.038 Get Log Page (02h): Supported 00:16:27.038 Identify (06h): Supported 00:16:27.038 Abort (08h): Supported 00:16:27.038 Set Features (09h): Supported 00:16:27.038 Get Features (0Ah): Supported 00:16:27.038 Asynchronous Event Request (0Ch): Supported 00:16:27.038 Keep Alive (18h): Supported 00:16:27.038 I/O Commands 00:16:27.038 ------------ 00:16:27.038 Flush (00h): Supported LBA-Change 00:16:27.038 Write (01h): Supported LBA-Change 00:16:27.038 Read (02h): Supported 00:16:27.038 Compare (05h): Supported 00:16:27.038 Write Zeroes (08h): Supported LBA-Change 00:16:27.038 Dataset Management (09h): Supported LBA-Change 00:16:27.038 Copy (19h): Supported LBA-Change 00:16:27.038 Unknown (79h): Supported LBA-Change 00:16:27.038 Unknown (7Ah): Supported 00:16:27.038 00:16:27.038 Error Log 00:16:27.038 ========= 00:16:27.038 00:16:27.038 Arbitration 00:16:27.038 
=========== 00:16:27.038 Arbitration Burst: 1 00:16:27.038 00:16:27.038 Power Management 00:16:27.038 ================ 00:16:27.038 Number of Power States: 1 00:16:27.038 Current Power State: Power State #0 00:16:27.038 Power State #0: 00:16:27.038 Max Power: 0.00 W 00:16:27.038 Non-Operational State: Operational 00:16:27.038 Entry Latency: Not Reported 00:16:27.038 Exit Latency: Not Reported 00:16:27.038 Relative Read Throughput: 0 00:16:27.038 Relative Read Latency: 0 00:16:27.038 Relative Write Throughput: 0 00:16:27.038 Relative Write Latency: 0 00:16:27.038 Idle Power: Not Reported 00:16:27.038 Active Power: Not Reported 00:16:27.038 Non-Operational Permissive Mode: Not Supported 00:16:27.038 00:16:27.038 Health Information 00:16:27.038 ================== 00:16:27.038 Critical Warnings: 00:16:27.038 Available Spare Space: OK 00:16:27.038 Temperature: OK 00:16:27.038 Device Reliability: OK 00:16:27.038 Read Only: No 00:16:27.038 Volatile Memory Backup: OK 00:16:27.038 Current Temperature: 0 Kelvin[2024-07-12 00:28:54.827825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:27.038 [2024-07-12 00:28:54.834602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:27.038 [2024-07-12 00:28:54.834653] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:27.038 [2024-07-12 00:28:54.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.038 [2024-07-12 00:28:54.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.038 [2024-07-12 00:28:54.834696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:27.038 [2024-07-12 00:28:54.834707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.038 [2024-07-12 00:28:54.834792] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:27.038 [2024-07-12 00:28:54.834815] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:27.038 [2024-07-12 00:28:54.835792] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.038 [2024-07-12 00:28:54.835872] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:27.038 [2024-07-12 00:28:54.835888] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:27.038 [2024-07-12 00:28:54.836800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:27.038 [2024-07-12 00:28:54.836827] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:27.038 [2024-07-12 00:28:54.836903] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:27.038 [2024-07-12 00:28:54.838416] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:27.297 (-273 Celsius) 00:16:27.297 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:27.297 Available Spare: 0% 00:16:27.297 Available Spare Threshold: 0% 00:16:27.297 Life Percentage Used: 0% 00:16:27.297 Data Units Read: 0 00:16:27.297 Data Units Written: 0 00:16:27.297 Host Read Commands: 0 00:16:27.297 Host 
Write Commands: 0 00:16:27.297 Controller Busy Time: 0 minutes 00:16:27.297 Power Cycles: 0 00:16:27.297 Power On Hours: 0 hours 00:16:27.297 Unsafe Shutdowns: 0 00:16:27.297 Unrecoverable Media Errors: 0 00:16:27.297 Lifetime Error Log Entries: 0 00:16:27.297 Warning Temperature Time: 0 minutes 00:16:27.297 Critical Temperature Time: 0 minutes 00:16:27.297 00:16:27.297 Number of Queues 00:16:27.297 ================ 00:16:27.297 Number of I/O Submission Queues: 127 00:16:27.297 Number of I/O Completion Queues: 127 00:16:27.297 00:16:27.297 Active Namespaces 00:16:27.297 ================= 00:16:27.297 Namespace ID:1 00:16:27.297 Error Recovery Timeout: Unlimited 00:16:27.297 Command Set Identifier: NVM (00h) 00:16:27.297 Deallocate: Supported 00:16:27.297 Deallocated/Unwritten Error: Not Supported 00:16:27.297 Deallocated Read Value: Unknown 00:16:27.297 Deallocate in Write Zeroes: Not Supported 00:16:27.297 Deallocated Guard Field: 0xFFFF 00:16:27.297 Flush: Supported 00:16:27.297 Reservation: Supported 00:16:27.297 Namespace Sharing Capabilities: Multiple Controllers 00:16:27.297 Size (in LBAs): 131072 (0GiB) 00:16:27.297 Capacity (in LBAs): 131072 (0GiB) 00:16:27.297 Utilization (in LBAs): 131072 (0GiB) 00:16:27.297 NGUID: 9CC61FD0FB084E0EAF0B22B2F0DF9C9A 00:16:27.297 UUID: 9cc61fd0-fb08-4e0e-af0b-22b2f0df9c9a 00:16:27.297 Thin Provisioning: Not Supported 00:16:27.297 Per-NS Atomic Units: Yes 00:16:27.297 Atomic Boundary Size (Normal): 0 00:16:27.297 Atomic Boundary Size (PFail): 0 00:16:27.297 Atomic Boundary Offset: 0 00:16:27.297 Maximum Single Source Range Length: 65535 00:16:27.297 Maximum Copy Length: 65535 00:16:27.297 Maximum Source Range Count: 1 00:16:27.297 NGUID/EUI64 Never Reused: No 00:16:27.297 Namespace Write Protected: No 00:16:27.297 Number of LBA Formats: 1 00:16:27.297 Current LBA Format: LBA Format #00 00:16:27.297 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:27.297 00:16:27.297 00:28:54 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:27.297 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.297 [2024-07-12 00:28:55.058940] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.639 Initializing NVMe Controllers 00:16:32.639 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:32.639 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:32.639 Initialization complete. Launching workers. 00:16:32.639 ======================================================== 00:16:32.639 Latency(us) 00:16:32.639 Device Information : IOPS MiB/s Average min max 00:16:32.639 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24143.03 94.31 5301.67 1458.27 8538.42 00:16:32.639 ======================================================== 00:16:32.639 Total : 24143.03 94.31 5301.67 1458.27 8538.42 00:16:32.639 00:16:32.639 [2024-07-12 00:29:00.175914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.639 00:29:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:32.639 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.639 [2024-07-12 00:29:00.400552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:37.904 Initializing NVMe Controllers 00:16:37.904 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: 
nqn.2019-07.io.spdk:cnode2 00:16:37.904 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:37.904 Initialization complete. Launching workers. 00:16:37.904 ======================================================== 00:16:37.904 Latency(us) 00:16:37.904 Device Information : IOPS MiB/s Average min max 00:16:37.904 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24102.40 94.15 5315.98 1451.86 8602.40 00:16:37.904 ======================================================== 00:16:37.904 Total : 24102.40 94.15 5315.98 1451.86 8602.40 00:16:37.904 00:16:37.904 [2024-07-12 00:29:05.422683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:37.904 00:29:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:37.904 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.904 [2024-07-12 00:29:05.644184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.168 [2024-07-12 00:29:10.791732] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.168 Initializing NVMe Controllers 00:16:43.168 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:43.168 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:43.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:43.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:43.168 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:43.168 Initialization complete. 
Launching workers. 00:16:43.168 Starting thread on core 2 00:16:43.168 Starting thread on core 3 00:16:43.168 Starting thread on core 1 00:16:43.168 00:29:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:43.168 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.427 [2024-07-12 00:29:11.075128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.713 [2024-07-12 00:29:14.128053] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.713 Initializing NVMe Controllers 00:16:46.713 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:46.713 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:46.713 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:46.713 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:46.713 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:46.713 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:46.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:46.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:46.713 Initialization complete. Launching workers. 
00:16:46.713 Starting thread on core 1 with urgent priority queue 00:16:46.713 Starting thread on core 2 with urgent priority queue 00:16:46.713 Starting thread on core 3 with urgent priority queue 00:16:46.713 Starting thread on core 0 with urgent priority queue 00:16:46.713 SPDK bdev Controller (SPDK2 ) core 0: 7384.00 IO/s 13.54 secs/100000 ios 00:16:46.713 SPDK bdev Controller (SPDK2 ) core 1: 6907.33 IO/s 14.48 secs/100000 ios 00:16:46.713 SPDK bdev Controller (SPDK2 ) core 2: 7298.00 IO/s 13.70 secs/100000 ios 00:16:46.713 SPDK bdev Controller (SPDK2 ) core 3: 8472.67 IO/s 11.80 secs/100000 ios 00:16:46.713 ======================================================== 00:16:46.713 00:16:46.713 00:29:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:46.713 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.713 [2024-07-12 00:29:14.394977] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.713 Initializing NVMe Controllers 00:16:46.713 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:46.713 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:46.713 Namespace ID: 1 size: 0GB 00:16:46.713 Initialization complete. 00:16:46.713 INFO: using host memory buffer for IO 00:16:46.713 Hello world! 
00:16:46.713 [2024-07-12 00:29:14.407106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.713 00:29:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:46.713 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.971 [2024-07-12 00:29:14.676628] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:48.345 Initializing NVMe Controllers 00:16:48.345 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:48.345 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:48.345 Initialization complete. Launching workers. 00:16:48.345 submit (in ns) avg, min, max = 11910.7, 4469.6, 4034250.4 00:16:48.345 complete (in ns) avg, min, max = 26351.4, 2647.4, 4009185.2 00:16:48.345 00:16:48.345 Submit histogram 00:16:48.345 ================ 00:16:48.345 Range in us Cumulative Count 00:16:48.345 4.456 - 4.480: 0.0338% ( 4) 00:16:48.345 4.480 - 4.504: 0.8364% ( 95) 00:16:48.345 4.504 - 4.527: 2.9824% ( 254) 00:16:48.345 4.527 - 4.551: 6.6408% ( 433) 00:16:48.345 4.551 - 4.575: 10.7469% ( 486) 00:16:48.345 4.575 - 4.599: 13.8053% ( 362) 00:16:48.345 4.599 - 4.622: 15.6641% ( 220) 00:16:48.345 4.622 - 4.646: 16.7540% ( 129) 00:16:48.345 4.646 - 4.670: 17.5059% ( 89) 00:16:48.345 4.670 - 4.693: 18.8915% ( 164) 00:16:48.345 4.693 - 4.717: 21.6796% ( 330) 00:16:48.345 4.717 - 4.741: 27.0869% ( 640) 00:16:48.345 4.741 - 4.764: 31.4211% ( 513) 00:16:48.345 4.764 - 4.788: 33.8881% ( 292) 00:16:48.345 4.788 - 4.812: 35.2146% ( 157) 00:16:48.345 4.812 - 4.836: 35.8314% ( 73) 00:16:48.345 4.836 - 4.859: 36.1271% ( 35) 00:16:48.345 4.859 - 4.883: 36.4735% ( 41) 00:16:48.345 4.883 - 4.907: 36.8959% ( 50) 00:16:48.345 4.907 - 4.930: 37.3859% ( 58) 
00:16:48.345 4.930 - 4.954: 37.6816% ( 35) 00:16:48.345 4.954 - 4.978: 38.0196% ( 40) 00:16:48.345 4.978 - 5.001: 38.3576% ( 40) 00:16:48.345 5.001 - 5.025: 38.5096% ( 18) 00:16:48.345 5.025 - 5.049: 38.6026% ( 11) 00:16:48.345 5.049 - 5.073: 38.6448% ( 5) 00:16:48.345 5.073 - 5.096: 38.7462% ( 12) 00:16:48.345 5.096 - 5.120: 39.1179% ( 44) 00:16:48.345 5.120 - 5.144: 40.2670% ( 136) 00:16:48.345 5.144 - 5.167: 49.3832% ( 1079) 00:16:48.345 5.167 - 5.191: 53.0500% ( 434) 00:16:48.345 5.191 - 5.215: 55.4748% ( 287) 00:16:48.345 5.215 - 5.239: 56.9956% ( 180) 00:16:48.345 5.239 - 5.262: 57.9250% ( 110) 00:16:48.345 5.262 - 5.286: 58.7107% ( 93) 00:16:48.345 5.286 - 5.310: 64.7938% ( 720) 00:16:48.345 5.310 - 5.333: 69.0351% ( 502) 00:16:48.345 5.333 - 5.357: 71.0544% ( 239) 00:16:48.345 5.357 - 5.381: 72.1781% ( 133) 00:16:48.345 5.381 - 5.404: 74.8141% ( 312) 00:16:48.345 5.404 - 5.428: 75.5576% ( 88) 00:16:48.345 5.428 - 5.452: 76.1406% ( 69) 00:16:48.345 5.452 - 5.476: 76.2927% ( 18) 00:16:48.345 5.476 - 5.499: 76.3687% ( 9) 00:16:48.345 5.499 - 5.523: 76.6137% ( 29) 00:16:48.345 5.523 - 5.547: 86.3636% ( 1154) 00:16:48.345 5.547 - 5.570: 90.4613% ( 485) 00:16:48.345 5.570 - 5.594: 93.2494% ( 330) 00:16:48.346 5.594 - 5.618: 94.3055% ( 125) 00:16:48.346 5.618 - 5.641: 94.8716% ( 67) 00:16:48.346 5.641 - 5.665: 95.1166% ( 29) 00:16:48.346 5.665 - 5.689: 95.2940% ( 21) 00:16:48.346 5.689 - 5.713: 95.3447% ( 6) 00:16:48.346 5.713 - 5.736: 95.3870% ( 5) 00:16:48.346 5.736 - 5.760: 95.4292% ( 5) 00:16:48.346 5.760 - 5.784: 95.5559% ( 15) 00:16:48.346 5.784 - 5.807: 95.6404% ( 10) 00:16:48.346 5.807 - 5.831: 95.8432% ( 24) 00:16:48.346 5.831 - 5.855: 95.9108% ( 8) 00:16:48.346 5.855 - 5.879: 95.9868% ( 9) 00:16:48.346 5.879 - 5.902: 96.0967% ( 13) 00:16:48.346 5.902 - 5.926: 96.1558% ( 7) 00:16:48.346 5.950 - 5.973: 96.1811% ( 3) 00:16:48.346 5.973 - 5.997: 96.2656% ( 10) 00:16:48.346 5.997 - 6.021: 96.2994% ( 4) 00:16:48.346 6.021 - 6.044: 96.3417% ( 5) 00:16:48.346 
6.044 - 6.068: 96.4008% ( 7) 00:16:48.346 6.068 - 6.116: 96.4515% ( 6) 00:16:48.346 6.116 - 6.163: 96.5022% ( 6) 00:16:48.346 6.163 - 6.210: 96.5444% ( 5) 00:16:48.346 6.210 - 6.258: 96.6627% ( 14) 00:16:48.346 6.258 - 6.305: 96.7979% ( 16) 00:16:48.346 6.305 - 6.353: 96.9331% ( 16) 00:16:48.346 6.353 - 6.400: 96.9500% ( 2) 00:16:48.346 6.400 - 6.447: 97.0176% ( 8) 00:16:48.346 6.447 - 6.495: 97.3471% ( 39) 00:16:48.346 6.495 - 6.542: 97.3809% ( 4) 00:16:48.346 6.542 - 6.590: 97.4485% ( 8) 00:16:48.346 6.590 - 6.637: 97.6428% ( 23) 00:16:48.346 6.637 - 6.684: 97.6935% ( 6) 00:16:48.346 6.684 - 6.732: 97.7273% ( 4) 00:16:48.346 6.732 - 6.779: 97.7780% ( 6) 00:16:48.346 6.779 - 6.827: 97.8118% ( 4) 00:16:48.346 6.827 - 6.874: 98.3863% ( 68) 00:16:48.346 6.874 - 6.921: 98.7496% ( 43) 00:16:48.346 6.921 - 6.969: 99.0030% ( 30) 00:16:48.346 6.969 - 7.016: 99.0791% ( 9) 00:16:48.346 7.016 - 7.064: 99.1044% ( 3) 00:16:48.346 7.064 - 7.111: 99.1213% ( 2) 00:16:48.346 7.206 - 7.253: 99.1298% ( 1) 00:16:48.346 7.443 - 7.490: 99.1382% ( 1) 00:16:48.346 7.538 - 7.585: 99.1551% ( 2) 00:16:48.346 7.633 - 7.680: 99.1636% ( 1) 00:16:48.346 7.727 - 7.775: 99.1720% ( 1) 00:16:48.346 7.775 - 7.822: 99.1889% ( 2) 00:16:48.346 7.822 - 7.870: 99.2143% ( 3) 00:16:48.346 8.012 - 8.059: 99.2396% ( 3) 00:16:48.346 8.107 - 8.154: 99.2565% ( 2) 00:16:48.346 8.154 - 8.201: 99.2650% ( 1) 00:16:48.346 8.201 - 8.249: 99.2903% ( 3) 00:16:48.346 8.249 - 8.296: 99.2987% ( 1) 00:16:48.346 8.296 - 8.344: 99.3241% ( 3) 00:16:48.346 8.344 - 8.391: 99.3325% ( 1) 00:16:48.346 8.391 - 8.439: 99.3494% ( 2) 00:16:48.346 8.486 - 8.533: 99.3663% ( 2) 00:16:48.346 8.581 - 8.628: 99.3917% ( 3) 00:16:48.346 8.628 - 8.676: 99.4086% ( 2) 00:16:48.346 8.723 - 8.770: 99.4255% ( 2) 00:16:48.346 8.818 - 8.865: 99.4339% ( 1) 00:16:48.346 8.865 - 8.913: 99.4508% ( 2) 00:16:48.346 8.913 - 8.960: 99.4846% ( 4) 00:16:48.346 8.960 - 9.007: 99.4931% ( 1) 00:16:48.346 9.007 - 9.055: 99.5015% ( 1) 00:16:48.346 9.150 - 9.197: 
99.5100% ( 1) 00:16:48.346 9.197 - 9.244: 99.5269% ( 2) 00:16:48.346 9.244 - 9.292: 99.5353% ( 1) 00:16:48.346 9.481 - 9.529: 99.5522% ( 2) 00:16:48.346 9.529 - 9.576: 99.5607% ( 1) 00:16:48.346 9.624 - 9.671: 99.5691% ( 1) 00:16:48.346 9.671 - 9.719: 99.5776% ( 1) 00:16:48.346 9.813 - 9.861: 99.5860% ( 1) 00:16:48.346 9.908 - 9.956: 99.5945% ( 1) 00:16:48.346 10.098 - 10.145: 99.6114% ( 2) 00:16:48.346 10.809 - 10.856: 99.6198% ( 1) 00:16:48.346 10.856 - 10.904: 99.6283% ( 1) 00:16:48.346 11.378 - 11.425: 99.6367% ( 1) 00:16:48.346 11.520 - 11.567: 99.6452% ( 1) 00:16:48.346 12.895 - 12.990: 99.6536% ( 1) 00:16:48.346 13.464 - 13.559: 99.6705% ( 2) 00:16:48.346 13.559 - 13.653: 99.6789% ( 1) 00:16:48.346 13.653 - 13.748: 99.7043% ( 3) 00:16:48.346 13.748 - 13.843: 99.7381% ( 4) 00:16:48.346 13.843 - 13.938: 99.7719% ( 4) 00:16:48.346 13.938 - 14.033: 99.7803% ( 1) 00:16:48.346 14.127 - 14.222: 99.7972% ( 2) 00:16:48.346 15.265 - 15.360: 99.8057% ( 1) 00:16:48.346 18.773 - 18.868: 99.8141% ( 1) 00:16:48.346 22.661 - 22.756: 99.8226% ( 1) 00:16:48.346 24.462 - 24.652: 99.8310% ( 1) 00:16:48.346 3737.979 - 3762.252: 99.8395% ( 1) 00:16:48.346 3980.705 - 4004.978: 99.9071% ( 8) 00:16:48.346 4004.978 - 4029.250: 99.9916% ( 10) 00:16:48.346 4029.250 - 4053.523: 100.0000% ( 1) 00:16:48.346 00:16:48.346 Complete histogram 00:16:48.346 ================== 00:16:48.346 Range in us Cumulative Count 00:16:48.346 2.643 - 2.655: 0.3295% ( 39) 00:16:48.346 2.655 - 2.667: 22.4823% ( 2622) 00:16:48.346 2.667 - 2.679: 64.8023% ( 5009) 00:16:48.346 2.679 - 2.690: 72.2964% ( 887) 00:16:48.346 2.690 - 2.702: 78.0500% ( 681) 00:16:48.346 2.702 - 2.714: 88.0534% ( 1184) 00:16:48.346 2.714 - 2.726: 93.1058% ( 598) 00:16:48.346 2.726 - 2.738: 96.3332% ( 382) 00:16:48.346 2.738 - 2.750: 97.2710% ( 111) 00:16:48.346 2.750 - 2.761: 97.8371% ( 67) 00:16:48.346 2.761 - 2.773: 98.2004% ( 43) 00:16:48.346 2.773 - 2.785: 98.4201% ( 26) 00:16:48.346 2.785 - 2.797: 98.5384% ( 14) 00:16:48.346 2.797 
- 2.809: 98.5975% ( 7) 00:16:48.346 2.809 - 2.821: 98.6228% ( 3) 00:16:48.346 2.904 - 2.916: 98.6397% ( 2) 00:16:48.346 2.916 - 2.927: 98.6651% ( 3) 00:16:48.346 2.939 - 2.951: 98.6904% ( 3) 00:16:48.346 2.951 - 2.963: 98.6989% ( 1) 00:16:48.346 2.963 - 2.975: 98.7327% ( 4) 00:16:48.346 2.975 - 2.987: 98.7496% ( 2) [2024-07-12 00:29:15.774730] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:48.346 2.987 - 2.999: 98.7580% ( 1) 00:16:48.346 2.999 - 3.010: 98.7834% ( 3) 00:16:48.346 3.010 - 3.022: 98.8003% ( 2) 00:16:48.346 3.022 - 3.034: 98.8087% ( 1) 00:16:48.346 3.058 - 3.081: 98.8256% ( 2) 00:16:48.346 3.129 - 3.153: 98.8425% ( 2) 00:16:48.346 3.153 - 3.176: 98.8510% ( 1) 00:16:48.346 3.224 - 3.247: 98.8594% ( 1) 00:16:48.346 3.271 - 3.295: 98.8679% ( 1) 00:16:48.346 3.295 - 3.319: 98.8763% ( 1) 00:16:48.346 3.366 - 3.390: 98.8848% ( 1) 00:16:48.346 3.437 - 3.461: 98.9017% ( 2) 00:16:48.346 3.461 - 3.484: 98.9101% ( 1) 00:16:48.346 3.508 - 3.532: 98.9270% ( 2) 00:16:48.346 3.532 - 3.556: 98.9692% ( 5) 00:16:48.346 3.556 - 3.579: 99.0030% ( 4) 00:16:48.347 3.603 - 3.627: 99.0199% ( 2) 00:16:48.347 3.627 - 3.650: 99.0537% ( 4) 00:16:48.347 3.650 - 3.674: 99.0791% ( 3) 00:16:48.347 3.674 - 3.698: 99.0875% ( 1) 00:16:48.347 3.698 - 3.721: 99.1044% ( 2) 00:16:48.347 3.769 - 3.793: 99.1129% ( 1) 00:16:48.347 3.816 - 3.840: 99.1298% ( 2) 00:16:48.347 3.982 - 4.006: 99.1382% ( 1) 00:16:48.347 4.338 - 4.361: 99.1467% ( 1) 00:16:48.347 4.409 - 4.433: 99.1551% ( 1) 00:16:48.347 4.646 - 4.670: 99.1636% ( 1) 00:16:48.347 4.812 - 4.836: 99.1720% ( 1) 00:16:48.347 5.333 - 5.357: 99.1805% ( 1) 00:16:48.347 5.547 - 5.570: 99.1889% ( 1) 00:16:48.347 5.570 - 5.594: 99.1974% ( 1) 00:16:48.347 5.807 - 5.831: 99.2143% ( 2) 00:16:48.347 5.950 - 5.973: 99.2227% ( 1) 00:16:48.347 6.044 - 6.068: 99.2312% ( 1) 00:16:48.347 6.116 - 6.163: 99.2396% ( 1) 00:16:48.347 6.163 - 6.210: 99.2481% ( 1) 00:16:48.347 6.258 - 
6.305: 99.2565% ( 1) 00:16:48.347 6.305 - 6.353: 99.2650% ( 1) 00:16:48.347 6.353 - 6.400: 99.2734% ( 1) 00:16:48.347 6.400 - 6.447: 99.2819% ( 1) 00:16:48.347 6.542 - 6.590: 99.2903% ( 1) 00:16:48.347 6.590 - 6.637: 99.2987% ( 1) 00:16:48.347 6.921 - 6.969: 99.3072% ( 1) 00:16:48.347 6.969 - 7.016: 99.3241% ( 2) 00:16:48.347 7.016 - 7.064: 99.3325% ( 1) 00:16:48.347 7.064 - 7.111: 99.3410% ( 1) 00:16:48.347 7.585 - 7.633: 99.3494% ( 1) 00:16:48.347 8.059 - 8.107: 99.3579% ( 1) 00:16:48.347 8.344 - 8.391: 99.3663% ( 1) 00:16:48.347 8.391 - 8.439: 99.3748% ( 1) 00:16:48.347 8.439 - 8.486: 99.3832% ( 1) 00:16:48.347 11.141 - 11.188: 99.3917% ( 1) 00:16:48.347 11.804 - 11.852: 99.4001% ( 1) 00:16:48.347 30.151 - 30.341: 99.4086% ( 1) 00:16:48.347 3980.705 - 4004.978: 99.8226% ( 49) 00:16:48.347 4004.978 - 4029.250: 100.0000% ( 21) 00:16:48.347 00:16:48.347 00:29:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:48.347 00:29:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:48.347 00:29:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:48.347 00:29:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:48.347 00:29:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:48.347 [ 00:16:48.347 { 00:16:48.347 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:48.347 "subtype": "Discovery", 00:16:48.347 "listen_addresses": [], 00:16:48.347 "allow_any_host": true, 00:16:48.347 "hosts": [] 00:16:48.347 }, 00:16:48.347 { 00:16:48.347 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:48.347 "subtype": "NVMe", 00:16:48.347 "listen_addresses": [ 00:16:48.347 { 00:16:48.347 "trtype": "VFIOUSER", 00:16:48.347 "adrfam": 
"IPv4", 00:16:48.347 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:48.347 "trsvcid": "0" 00:16:48.347 } 00:16:48.347 ], 00:16:48.347 "allow_any_host": true, 00:16:48.347 "hosts": [], 00:16:48.347 "serial_number": "SPDK1", 00:16:48.347 "model_number": "SPDK bdev Controller", 00:16:48.347 "max_namespaces": 32, 00:16:48.347 "min_cntlid": 1, 00:16:48.347 "max_cntlid": 65519, 00:16:48.347 "namespaces": [ 00:16:48.347 { 00:16:48.347 "nsid": 1, 00:16:48.347 "bdev_name": "Malloc1", 00:16:48.347 "name": "Malloc1", 00:16:48.347 "nguid": "FA6B238BD9E443FD873E8FFB85633BEE", 00:16:48.347 "uuid": "fa6b238b-d9e4-43fd-873e-8ffb85633bee" 00:16:48.347 }, 00:16:48.347 { 00:16:48.347 "nsid": 2, 00:16:48.347 "bdev_name": "Malloc3", 00:16:48.347 "name": "Malloc3", 00:16:48.347 "nguid": "305FA6FA76C740FA9DF1C95507B7D9BA", 00:16:48.347 "uuid": "305fa6fa-76c7-40fa-9df1-c95507b7d9ba" 00:16:48.347 } 00:16:48.347 ] 00:16:48.347 }, 00:16:48.347 { 00:16:48.347 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:48.347 "subtype": "NVMe", 00:16:48.347 "listen_addresses": [ 00:16:48.347 { 00:16:48.347 "trtype": "VFIOUSER", 00:16:48.347 "adrfam": "IPv4", 00:16:48.347 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:48.347 "trsvcid": "0" 00:16:48.347 } 00:16:48.347 ], 00:16:48.347 "allow_any_host": true, 00:16:48.347 "hosts": [], 00:16:48.347 "serial_number": "SPDK2", 00:16:48.347 "model_number": "SPDK bdev Controller", 00:16:48.347 "max_namespaces": 32, 00:16:48.347 "min_cntlid": 1, 00:16:48.347 "max_cntlid": 65519, 00:16:48.347 "namespaces": [ 00:16:48.347 { 00:16:48.347 "nsid": 1, 00:16:48.347 "bdev_name": "Malloc2", 00:16:48.347 "name": "Malloc2", 00:16:48.347 "nguid": "9CC61FD0FB084E0EAF0B22B2F0DF9C9A", 00:16:48.347 "uuid": "9cc61fd0-fb08-4e0e-af0b-22b2f0df9c9a" 00:16:48.347 } 00:16:48.347 ] 00:16:48.347 } 00:16:48.347 ] 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user 
-- target/nvmf_vfio_user.sh@34 -- # aerpid=929936 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:16:48.347 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:48.347 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.605 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.605 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:16:48.605 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:16:48.605 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:16:48.606 [2024-07-12 00:29:16.276846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:48.606 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:48.606 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:48.606 00:29:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:16:48.606 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:48.606 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:48.863 Malloc4 00:16:48.863 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:49.120 [2024-07-12 00:29:16.942156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.120 00:29:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:49.377 Asynchronous Event Request test 00:16:49.377 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.377 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.377 Registering asynchronous event callbacks... 00:16:49.377 Starting namespace attribute notice tests for all controllers... 00:16:49.377 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:49.377 aer_cb - Changed Namespace 00:16:49.377 Cleaning up... 
00:16:49.636 [ 00:16:49.636 { 00:16:49.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:49.636 "subtype": "Discovery", 00:16:49.636 "listen_addresses": [], 00:16:49.636 "allow_any_host": true, 00:16:49.636 "hosts": [] 00:16:49.636 }, 00:16:49.636 { 00:16:49.636 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:49.636 "subtype": "NVMe", 00:16:49.636 "listen_addresses": [ 00:16:49.636 { 00:16:49.636 "trtype": "VFIOUSER", 00:16:49.636 "adrfam": "IPv4", 00:16:49.636 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:49.636 "trsvcid": "0" 00:16:49.636 } 00:16:49.636 ], 00:16:49.636 "allow_any_host": true, 00:16:49.636 "hosts": [], 00:16:49.636 "serial_number": "SPDK1", 00:16:49.636 "model_number": "SPDK bdev Controller", 00:16:49.636 "max_namespaces": 32, 00:16:49.636 "min_cntlid": 1, 00:16:49.636 "max_cntlid": 65519, 00:16:49.636 "namespaces": [ 00:16:49.636 { 00:16:49.636 "nsid": 1, 00:16:49.636 "bdev_name": "Malloc1", 00:16:49.636 "name": "Malloc1", 00:16:49.636 "nguid": "FA6B238BD9E443FD873E8FFB85633BEE", 00:16:49.636 "uuid": "fa6b238b-d9e4-43fd-873e-8ffb85633bee" 00:16:49.636 }, 00:16:49.636 { 00:16:49.636 "nsid": 2, 00:16:49.636 "bdev_name": "Malloc3", 00:16:49.636 "name": "Malloc3", 00:16:49.636 "nguid": "305FA6FA76C740FA9DF1C95507B7D9BA", 00:16:49.636 "uuid": "305fa6fa-76c7-40fa-9df1-c95507b7d9ba" 00:16:49.636 } 00:16:49.636 ] 00:16:49.636 }, 00:16:49.636 { 00:16:49.636 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:49.636 "subtype": "NVMe", 00:16:49.636 "listen_addresses": [ 00:16:49.636 { 00:16:49.636 "trtype": "VFIOUSER", 00:16:49.636 "adrfam": "IPv4", 00:16:49.636 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:49.636 "trsvcid": "0" 00:16:49.636 } 00:16:49.636 ], 00:16:49.636 "allow_any_host": true, 00:16:49.636 "hosts": [], 00:16:49.636 "serial_number": "SPDK2", 00:16:49.636 "model_number": "SPDK bdev Controller", 00:16:49.636 "max_namespaces": 32, 00:16:49.636 "min_cntlid": 1, 00:16:49.636 "max_cntlid": 65519, 00:16:49.636 "namespaces": [ 
00:16:49.636 { 00:16:49.636 "nsid": 1, 00:16:49.636 "bdev_name": "Malloc2", 00:16:49.636 "name": "Malloc2", 00:16:49.636 "nguid": "9CC61FD0FB084E0EAF0B22B2F0DF9C9A", 00:16:49.636 "uuid": "9cc61fd0-fb08-4e0e-af0b-22b2f0df9c9a" 00:16:49.636 }, 00:16:49.636 { 00:16:49.636 "nsid": 2, 00:16:49.636 "bdev_name": "Malloc4", 00:16:49.636 "name": "Malloc4", 00:16:49.636 "nguid": "B1F38F15435B49B5AFD4B711C6E98227", 00:16:49.636 "uuid": "b1f38f15-435b-49b5-afd4-b711c6e98227" 00:16:49.636 } 00:16:49.636 ] 00:16:49.636 } 00:16:49.636 ] 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 929936 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 925497 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 925497 ']' 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 925497 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 925497 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 925497' 00:16:49.636 killing process with pid 925497 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 925497 00:16:49.636 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 925497 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm 
-rf /var/run/vfio-user 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=930050 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 930050' 00:16:49.895 Process pid: 930050 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 930050 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 930050 ']' 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:49.895 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:49.895 [2024-07-12 00:29:17.555776] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:49.895 [2024-07-12 00:29:17.557050] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:49.895 [2024-07-12 00:29:17.557121] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.895 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.895 [2024-07-12 00:29:17.620214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:49.895 [2024-07-12 00:29:17.711215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.895 [2024-07-12 00:29:17.711277] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.895 [2024-07-12 00:29:17.711293] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.895 [2024-07-12 00:29:17.711313] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.895 [2024-07-12 00:29:17.711326] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:49.895 [2024-07-12 00:29:17.711384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.895 [2024-07-12 00:29:17.711434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.895 [2024-07-12 00:29:17.711485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.895 [2024-07-12 00:29:17.711488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.153 [2024-07-12 00:29:17.800803] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:50.153 [2024-07-12 00:29:17.800998] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:50.153 [2024-07-12 00:29:17.801258] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:50.153 [2024-07-12 00:29:17.801724] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:50.153 [2024-07-12 00:29:17.801975] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:50.153 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:50.153 00:29:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:16:50.153 00:29:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:51.086 00:29:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:51.345 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:51.345 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:51.345 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:51.345 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:51.345 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:51.604 Malloc1 00:16:51.863 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:52.121 00:29:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:52.379 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:52.637 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:52.637 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:16:52.637 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:52.895 Malloc2 00:16:52.895 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:53.153 00:29:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:53.411 00:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 930050 ']' 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 930050' 00:16:54.009 killing 
process with pid 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 930050 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:54.009 00:16:54.009 real 0m53.574s 00:16:54.009 user 3m32.109s 00:16:54.009 sys 0m4.492s 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:54.009 ************************************ 00:16:54.009 END TEST nvmf_vfio_user 00:16:54.009 ************************************ 00:16:54.009 00:29:21 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:54.009 00:29:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:54.009 00:29:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:54.009 00:29:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:54.009 ************************************ 00:16:54.009 START TEST nvmf_vfio_user_nvme_compliance 00:16:54.009 ************************************ 00:16:54.009 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:54.268 * Looking for test storage... 
00:16:54.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:54.268 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.269 00:29:21 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:16:54.269 00:29:21 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=930520 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 930520' 00:16:54.269 Process pid: 930520 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 930520 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 930520 ']' 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:54.269 00:29:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:54.269 [2024-07-12 00:29:21.929754] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:54.269 [2024-07-12 00:29:21.929865] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.269 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.269 [2024-07-12 00:29:21.993282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:54.269 [2024-07-12 00:29:22.083593] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.269 [2024-07-12 00:29:22.083655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:54.269 [2024-07-12 00:29:22.083671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.269 [2024-07-12 00:29:22.083684] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.269 [2024-07-12 00:29:22.083696] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.269 [2024-07-12 00:29:22.086622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.269 [2024-07-12 00:29:22.086699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.269 [2024-07-12 00:29:22.090602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.528 00:29:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:54.528 00:29:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:16:54.528 00:29:22 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.462 malloc0 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.462 00:29:23 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:55.719 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.719 00:16:55.719 00:16:55.719 CUnit - A unit testing framework for C - Version 2.1-3 00:16:55.719 http://cunit.sourceforge.net/ 00:16:55.719 00:16:55.719 00:16:55.719 Suite: nvme_compliance 00:16:55.719 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 00:29:23.425526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.719 [2024-07-12 00:29:23.427090] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:55.719 [2024-07-12 00:29:23.427117] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:55.719 [2024-07-12 00:29:23.427131] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:55.719 [2024-07-12 00:29:23.428565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.719 passed 00:16:55.719 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 00:29:23.530270] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.719 [2024-07-12 00:29:23.533281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:55.977 passed 00:16:55.977 Test: admin_identify_ns ...[2024-07-12 00:29:23.635627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:55.977 [2024-07-12 00:29:23.697637] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:55.977 [2024-07-12 00:29:23.705604] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:55.977 [2024-07-12 00:29:23.726769] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling 
controller 00:16:55.977 passed 00:16:56.235 Test: admin_get_features_mandatory_features ...[2024-07-12 00:29:23.821912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.235 [2024-07-12 00:29:23.825942] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.235 passed 00:16:56.235 Test: admin_get_features_optional_features ...[2024-07-12 00:29:23.923552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.235 [2024-07-12 00:29:23.926571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.235 passed 00:16:56.235 Test: admin_set_features_number_of_queues ...[2024-07-12 00:29:24.022708] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.493 [2024-07-12 00:29:24.129760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.493 passed 00:16:56.493 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 00:29:24.227005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.493 [2024-07-12 00:29:24.230036] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.493 passed 00:16:56.493 Test: admin_get_log_page_with_lpo ...[2024-07-12 00:29:24.327117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.751 [2024-07-12 00:29:24.394630] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:56.751 [2024-07-12 00:29:24.407708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.751 passed 00:16:56.751 Test: fabric_property_get ...[2024-07-12 00:29:24.504997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:56.751 [2024-07-12 00:29:24.506329] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 
0x7f failed 00:16:56.751 [2024-07-12 00:29:24.508022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:56.751 passed 00:16:57.010 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 00:29:24.607676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.010 [2024-07-12 00:29:24.609007] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:57.010 [2024-07-12 00:29:24.610708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.010 passed 00:16:57.010 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 00:29:24.707552] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.010 [2024-07-12 00:29:24.792611] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:57.010 [2024-07-12 00:29:24.808617] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:57.010 [2024-07-12 00:29:24.813753] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.268 passed 00:16:57.268 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 00:29:24.909899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.268 [2024-07-12 00:29:24.911236] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:57.268 [2024-07-12 00:29:24.912935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.268 passed 00:16:57.268 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 00:29:25.011075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.268 [2024-07-12 00:29:25.087602] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:57.526 [2024-07-12 00:29:25.111617] 
vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:57.526 [2024-07-12 00:29:25.116763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.526 passed 00:16:57.526 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 00:29:25.213992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.526 [2024-07-12 00:29:25.215384] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:57.526 [2024-07-12 00:29:25.215428] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:57.526 [2024-07-12 00:29:25.217023] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.526 passed 00:16:57.526 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 00:29:25.316173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.784 [2024-07-12 00:29:25.408599] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:57.784 [2024-07-12 00:29:25.416600] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:57.784 [2024-07-12 00:29:25.424615] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:57.784 [2024-07-12 00:29:25.432608] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:57.784 [2024-07-12 00:29:25.461756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:57.784 passed 00:16:57.784 Test: admin_create_io_sq_verify_pc ...[2024-07-12 00:29:25.557855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:57.784 [2024-07-12 00:29:25.574613] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:57.784 [2024-07-12 00:29:25.592392] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:58.041 passed 00:16:58.041 Test: admin_create_io_qp_max_qps ...[2024-07-12 00:29:25.688066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:58.975 [2024-07-12 00:29:26.795613] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:59.540 [2024-07-12 00:29:27.175862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.540 passed 00:16:59.540 Test: admin_create_io_sq_shared_cq ...[2024-07-12 00:29:27.272218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.798 [2024-07-12 00:29:27.404596] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:59.798 [2024-07-12 00:29:27.441693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.798 passed 00:16:59.798 00:16:59.798 Run Summary: Type Total Ran Passed Failed Inactive 00:16:59.798 suites 1 1 n/a 0 0 00:16:59.798 tests 18 18 18 0 0 00:16:59.798 asserts 360 360 360 0 n/a 00:16:59.798 00:16:59.798 Elapsed time = 1.693 seconds 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 930520 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 930520 ']' 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 930520 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 930520 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 930520' 00:16:59.798 killing process with pid 930520 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 930520 00:16:59.798 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 930520 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:00.058 00:17:00.058 real 0m5.901s 00:17:00.058 user 0m16.689s 00:17:00.058 sys 0m0.519s 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:00.058 ************************************ 00:17:00.058 END TEST nvmf_vfio_user_nvme_compliance 00:17:00.058 ************************************ 00:17:00.058 00:29:27 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:00.058 00:29:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:00.058 00:29:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:00.058 00:29:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.058 ************************************ 00:17:00.058 START TEST nvmf_vfio_user_fuzz 00:17:00.058 ************************************ 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:00.058 * Looking for test storage... 00:17:00.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.058 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=931166 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 931166' 00:17:00.059 Process pid: 931166 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 931166 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 931166 ']' 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.059 00:29:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:00.317 00:29:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:00.317 00:29:28 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:17:00.317 00:29:28 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 malloc0 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.690 
00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:01.690 00:29:29 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:33.756 Fuzzing completed. 
Shutting down the fuzz application 00:17:33.756 00:17:33.756 Dumping successful admin opcodes: 00:17:33.756 8, 9, 10, 24, 00:17:33.756 Dumping successful io opcodes: 00:17:33.756 0, 00:17:33.756 NS: 0x200003a1ef00 I/O qp, Total commands completed: 544215, total successful commands: 2092, random_seed: 3734512128 00:17:33.756 NS: 0x200003a1ef00 admin qp, Total commands completed: 102510, total successful commands: 842, random_seed: 252863040 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 931166 ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 931166' 00:17:33.756 killing process with pid 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@965 -- # kill 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 931166 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:33.756 00:17:33.756 real 0m32.112s 00:17:33.756 user 0m31.367s 00:17:33.756 sys 0m27.271s 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:33.756 00:29:59 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:33.756 ************************************ 00:17:33.756 END TEST nvmf_vfio_user_fuzz 00:17:33.756 ************************************ 00:17:33.756 00:29:59 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:33.756 00:29:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:33.756 00:29:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:33.756 00:29:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.756 ************************************ 00:17:33.756 START TEST nvmf_host_management 00:17:33.756 ************************************ 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:33.756 * Looking for test storage... 
00:17:33.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.756 
00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.756 00:29:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.756 00:30:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.756 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:33.757 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.757 
00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:33.757 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:33.757 Found net devices under 0000:08:00.0: cvl_0_0 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:33.757 Found net devices under 0000:08:00.1: cvl_0_1 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.757 00:30:01 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.757 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:34.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:17:34.015 00:17:34.015 --- 10.0.0.2 ping statistics --- 00:17:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.015 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:17:34.015 00:17:34.015 --- 10.0.0.1 ping statistics --- 00:17:34.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.015 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:34.015 00:30:01 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=935397 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 935397 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 935397 ']' 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.015 00:30:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.015 [2024-07-12 00:30:01.755844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:17:34.015 [2024-07-12 00:30:01.755934] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.015 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.015 [2024-07-12 00:30:01.819461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.273 [2024-07-12 00:30:01.908370] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.273 [2024-07-12 00:30:01.908428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.273 [2024-07-12 00:30:01.908444] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.273 [2024-07-12 00:30:01.908458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.273 [2024-07-12 00:30:01.908469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.273 [2024-07-12 00:30:01.908548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.273 [2024-07-12 00:30:01.908611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.273 [2024-07-12 00:30:01.908662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:34.273 [2024-07-12 00:30:01.908665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.273 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:34.273 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.274 [2024-07-12 00:30:02.052153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.274 00:30:02 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.274 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.274 Malloc0 00:17:34.274 [2024-07-12 00:30:02.109263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=935460 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 935460 /var/tmp/bdevperf.sock 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 935460 ']' 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@532 -- # config=() 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:34.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:34.532 { 00:17:34.532 "params": { 00:17:34.532 "name": "Nvme$subsystem", 00:17:34.532 "trtype": "$TEST_TRANSPORT", 00:17:34.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:34.532 "adrfam": "ipv4", 00:17:34.532 "trsvcid": "$NVMF_PORT", 00:17:34.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:34.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:34.532 "hdgst": ${hdgst:-false}, 00:17:34.532 "ddgst": ${ddgst:-false} 00:17:34.532 }, 00:17:34.532 "method": "bdev_nvme_attach_controller" 00:17:34.532 } 00:17:34.532 EOF 00:17:34.532 )") 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:34.532 00:30:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:34.532 "params": { 00:17:34.532 "name": "Nvme0", 00:17:34.532 "trtype": "tcp", 00:17:34.532 "traddr": "10.0.0.2", 00:17:34.532 "adrfam": "ipv4", 00:17:34.532 "trsvcid": "4420", 00:17:34.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:34.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:34.532 "hdgst": false, 00:17:34.532 "ddgst": false 00:17:34.532 }, 00:17:34.532 "method": "bdev_nvme_attach_controller" 00:17:34.532 }' 00:17:34.532 [2024-07-12 00:30:02.187061] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:34.532 [2024-07-12 00:30:02.187161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935460 ] 00:17:34.532 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.532 [2024-07-12 00:30:02.248340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.532 [2024-07-12 00:30:02.335838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.790 Running I/O for 10 seconds... 
00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.048 
00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:17:35.048 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:35.310 00:30:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:35.310 00:30:03 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.310 00:30:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:35.310 [2024-07-12 00:30:03.036647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.310 [2024-07-12 00:30:03.036733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.310 [2024-07-12 00:30:03.036765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.310 [2024-07-12 00:30:03.036783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.310 [2024-07-12 00:30:03.036812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.310 [2024-07-12 00:30:03.036829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.310 [2024-07-12 00:30:03.036852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.036869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.036887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.036904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.036922] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.036937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.036956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.036972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.036989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 
[2024-07-12 00:30:03.037509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.037982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.037998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:35.311 [2024-07-12 00:30:03.038123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.311 [2024-07-12 00:30:03.038319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.311 [2024-07-12 00:30:03.038337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 
00:30:03.038920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.038972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.312 [2024-07-12 00:30:03.038989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:35.312 [2024-07-12 00:30:03.039072] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5a6da0 was disconnected and freed. reset controller. 
00:17:35.312 [2024-07-12 00:30:03.040399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:35.312 task offset: 78464 on job bdev=Nvme0n1 fails 00:17:35.312 00:17:35.312 Latency(us) 00:17:35.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.312 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:35.312 Job: Nvme0n1 ended in about 0.41 seconds with error 00:17:35.312 Verification LBA range: start 0x0 length 0x400 00:17:35.312 Nvme0n1 : 0.41 1389.09 86.82 154.34 0.00 40035.32 3082.62 39807.05 00:17:35.312 =================================================================================================================== 00:17:35.312 Total : 1389.09 86.82 154.34 0.00 40035.32 3082.62 39807.05 00:17:35.312 [2024-07-12 00:30:03.042849] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:35.312 [2024-07-12 00:30:03.042884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aca70 (9): Bad file descriptor 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.312 00:30:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:35.312 [2024-07-12 00:30:03.095177] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:36.285 00:30:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 935460 00:17:36.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (935460) - No such process 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:36.286 { 00:17:36.286 "params": { 00:17:36.286 "name": "Nvme$subsystem", 00:17:36.286 "trtype": "$TEST_TRANSPORT", 00:17:36.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.286 "adrfam": "ipv4", 00:17:36.286 "trsvcid": "$NVMF_PORT", 00:17:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.286 "hdgst": ${hdgst:-false}, 00:17:36.286 "ddgst": ${ddgst:-false} 00:17:36.286 }, 00:17:36.286 "method": "bdev_nvme_attach_controller" 00:17:36.286 } 00:17:36.286 EOF 00:17:36.286 )") 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:17:36.286 00:30:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:36.286 "params": { 00:17:36.286 "name": "Nvme0", 00:17:36.286 "trtype": "tcp", 00:17:36.286 "traddr": "10.0.0.2", 00:17:36.286 "adrfam": "ipv4", 00:17:36.286 "trsvcid": "4420", 00:17:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:36.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:36.286 "hdgst": false, 00:17:36.286 "ddgst": false 00:17:36.286 }, 00:17:36.286 "method": "bdev_nvme_attach_controller" 00:17:36.286 }' 00:17:36.286 [2024-07-12 00:30:04.100734] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:36.286 [2024-07-12 00:30:04.100846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935686 ] 00:17:36.547 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.547 [2024-07-12 00:30:04.161893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.547 [2024-07-12 00:30:04.252260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.806 Running I/O for 1 seconds... 
00:17:37.739 00:17:37.739 Latency(us) 00:17:37.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.739 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:37.739 Verification LBA range: start 0x0 length 0x400 00:17:37.739 Nvme0n1 : 1.04 1535.55 95.97 0.00 0.00 40870.44 5315.70 38253.61 00:17:37.739 =================================================================================================================== 00:17:37.739 Total : 1535.55 95.97 0.00 0.00 40870.44 5315.70 38253.61 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.997 rmmod nvme_tcp 00:17:37.997 rmmod nvme_fabrics 00:17:37.997 rmmod nvme_keyring 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.997 
00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 935397 ']' 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 935397 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 935397 ']' 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 935397 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 935397 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 935397' 00:17:37.997 killing process with pid 935397 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 935397 00:17:37.997 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 935397 00:17:38.258 [2024-07-12 00:30:05.923788] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.258 00:30:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.167 00:30:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.167 00:30:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:40.167 00:17:40.167 real 0m8.070s 00:17:40.167 user 0m19.145s 00:17:40.167 sys 0m2.230s 00:17:40.167 00:30:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:40.167 00:30:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:40.167 ************************************ 00:17:40.167 END TEST nvmf_host_management 00:17:40.167 ************************************ 00:17:40.426 00:30:08 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:40.426 00:30:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:40.426 00:30:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:40.426 00:30:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.426 ************************************ 00:17:40.426 START TEST nvmf_lvol 00:17:40.426 ************************************ 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:40.426 * Looking for test storage... 
00:17:40.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.426 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.427 00:30:08 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.331 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:42.332 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:42.332 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:42.332 Found net devices under 0000:08:00.0: cvl_0_0 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:42.332 Found net devices under 0000:08:00.1: cvl_0_1 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:42.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:17:42.332 00:17:42.332 --- 10.0.0.2 ping statistics --- 00:17:42.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.332 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:17:42.332 00:17:42.332 --- 10.0.0.1 ping statistics --- 00:17:42.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.332 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=937779 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 937779 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 937779 ']' 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.332 00:30:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:42.332 [2024-07-12 00:30:09.895571] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:42.332 [2024-07-12 00:30:09.895685] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.332 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.332 [2024-07-12 00:30:09.965017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:42.332 [2024-07-12 00:30:10.053383] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.332 [2024-07-12 00:30:10.053436] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.332 [2024-07-12 00:30:10.053452] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.332 [2024-07-12 00:30:10.053465] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.332 [2024-07-12 00:30:10.053477] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:42.332 [2024-07-12 00:30:10.053567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.332 [2024-07-12 00:30:10.053598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.332 [2024-07-12 00:30:10.053604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.332 00:30:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:42.332 00:30:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:17:42.332 00:30:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:42.332 00:30:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.332 00:30:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:42.590 00:30:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.590 00:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:42.847 [2024-07-12 00:30:10.442264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.847 00:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:43.105 00:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:43.105 00:30:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:43.362 00:30:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:43.362 00:30:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:43.619 00:30:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:43.876 00:30:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e572a40a-1cff-438a-8ef0-5a3fdd8b3586 00:17:43.876 00:30:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e572a40a-1cff-438a-8ef0-5a3fdd8b3586 lvol 20 00:17:44.440 00:30:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=79365219-40b5-464f-973e-8c392584a7bb 00:17:44.440 00:30:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:44.697 00:30:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 79365219-40b5-464f-973e-8c392584a7bb 00:17:44.954 00:30:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:45.212 [2024-07-12 00:30:12.885580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.212 00:30:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:45.469 00:30:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=938138 00:17:45.469 00:30:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:45.469 00:30:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:45.469 EAL: No free 2048 kB hugepages reported on node 1 
00:17:46.402 00:30:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 79365219-40b5-464f-973e-8c392584a7bb MY_SNAPSHOT 00:17:46.967 00:30:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=daf9c5f7-618b-460b-ac02-d1d1250b89da 00:17:46.967 00:30:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 79365219-40b5-464f-973e-8c392584a7bb 30 00:17:47.224 00:30:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone daf9c5f7-618b-460b-ac02-d1d1250b89da MY_CLONE 00:17:47.481 00:30:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=88e57381-5127-4ff3-bb1a-c4d417266a8c 00:17:47.481 00:30:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 88e57381-5127-4ff3-bb1a-c4d417266a8c 00:17:48.048 00:30:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 938138 00:17:56.149 Initializing NVMe Controllers 00:17:56.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:56.149 Controller IO queue size 128, less than required. 00:17:56.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:56.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:56.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:56.149 Initialization complete. Launching workers. 
00:17:56.149 ========================================================
00:17:56.149 Latency(us)
00:17:56.149 Device Information : IOPS MiB/s Average min max
00:17:56.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9625.10 37.60 13301.66 1655.38 124955.96
00:17:56.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9486.20 37.06 13496.28 2070.18 63070.21
00:17:56.149 ========================================================
00:17:56.149 Total : 19111.30 74.65 13398.27 1655.38 124955.96
00:17:56.149
00:17:56.149 00:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:56.150 00:30:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 79365219-40b5-464f-973e-8c392584a7bb
00:17:56.445 00:30:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e572a40a-1cff-438a-8ef0-5a3fdd8b3586
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:56.704 rmmod nvme_tcp
00:17:56.704 rmmod nvme_fabrics
00:17:56.704 rmmod nvme_keyring
00:17:56.704
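For reference, the nvmf_lvol sequence traced above (lvstore and lvol creation, NVMe/TCP export, then snapshot, resize, clone, and inflate) can be condensed into a standalone sketch. It is a dry run: `RPC` is set to echo so the script runs without a live SPDK target, and `LVS_UUID`/`LVOL_UUID`/`SNAP_UUID`/`CLONE_UUID` are placeholders for the UUIDs the real RPCs return (the log shows them being captured from each command's output).

```shell
#!/bin/sh
# Dry-run sketch of the lvol flow from the trace above. RPC echoes each
# command; point it at scripts/rpc.py (with a running nvmf_tgt) to run it
# for real. All UUID arguments are placeholders, not real identifiers.
RPC="echo rpc.py"

$RPC bdev_lvol_create_lvstore raid0 lvs                  # prints the lvstore UUID
$RPC bdev_lvol_create -u LVS_UUID lvol 20                # 20 MiB lvol, prints its UUID
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 LVOL_UUID
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_lvol_snapshot LVOL_UUID MY_SNAPSHOT            # prints the snapshot UUID
$RPC bdev_lvol_resize LVOL_UUID 30                       # grow the live lvol to 30 MiB
$RPC bdev_lvol_clone SNAP_UUID MY_CLONE                  # thin clone of the snapshot
$RPC bdev_lvol_inflate CLONE_UUID                        # detach the clone from its snapshot
```

In the trace, spdk_nvme_perf keeps I/O running against the exported namespace while the snapshot/resize/clone/inflate steps execute, which is what the test is exercising.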
00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 937779 ']' 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 937779 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 937779 ']' 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 937779 00:17:56.704 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 937779 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 937779' 00:17:56.705 killing process with pid 937779 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 937779 00:17:56.705 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 937779 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.964 00:30:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.503 00:17:59.503 real 0m18.684s 00:17:59.503 user 1m5.541s 00:17:59.503 sys 0m5.329s 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:59.503 ************************************ 00:17:59.503 END TEST nvmf_lvol 00:17:59.503 ************************************ 00:17:59.503 00:30:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:59.503 00:30:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:59.503 00:30:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:59.503 00:30:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.503 ************************************ 00:17:59.503 START TEST nvmf_lvs_grow 00:17:59.503 ************************************ 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:59.503 * Looking for test storage... 
00:17:59.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.503 00:30:26 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.503 00:30:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.504 00:30:26 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.504 00:30:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.883 00:30:28 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.883 00:30:28 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:00.883 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:00.883 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.883 00:30:28 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:00.883 Found net devices under 0000:08:00.0: cvl_0_0 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:00.883 Found net devices under 0000:08:00.1: cvl_0_1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:18:00.883 00:18:00.883 --- 10.0.0.2 ping statistics --- 00:18:00.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.883 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:18:00.883 00:18:00.883 --- 10.0.0.1 ping statistics --- 00:18:00.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.883 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=940635 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:00.883 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 940635 00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 940635 ']' 
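The nvmftestinit trace above builds a two-namespace TCP topology: the target-side port (cvl_0_0) is moved into a private network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, and the two ends ping each other before the target starts. A dry-run sketch of that layout, with `IP`/`IPT` set to echo so it runs without root or the cvl interfaces:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup performed by nvmf/common.sh above.
# IP and IPT echo the commands; drop the echo (and run as root, with the
# cvl_0_* interfaces present) to apply them.
IP="echo ip"
IPT="echo iptables"
NS=cvl_0_0_ns_spdk

$IP netns add $NS
$IP link set cvl_0_0 netns $NS                           # target NIC into the namespace
$IP addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
$IP netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
$IP link set cvl_0_1 up
$IP netns exec $NS ip link set cvl_0_0 up
$IP netns exec $NS ip link set lo up
$IPT -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP traffic
```

This is why the target app is later launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`: it must listen on 10.0.0.2 inside the namespace.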
00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.884 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:00.884 [2024-07-12 00:30:28.585678] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:00.884 [2024-07-12 00:30:28.585785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.884 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.884 [2024-07-12 00:30:28.654123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.143 [2024-07-12 00:30:28.741046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.143 [2024-07-12 00:30:28.741094] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.143 [2024-07-12 00:30:28.741109] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.143 [2024-07-12 00:30:28.741123] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.143 [2024-07-12 00:30:28.741142] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.143 [2024-07-12 00:30:28.741177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.143 00:30:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:01.402 [2024-07-12 00:30:29.135775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:01.402 ************************************ 00:18:01.402 START TEST lvs_grow_clean 00:18:01.402 ************************************ 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:01.402 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:01.971 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:01.971 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:02.230 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d0370c7e-7b76-48e4-af51-2611647ba215 00:18:02.230 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:02.230 00:30:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:02.488 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:02.488 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:02.488 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0370c7e-7b76-48e4-af51-2611647ba215 lvol 150 00:18:02.746 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ac095e6-06a0-4f54-87c2-79df03ea6726 00:18:02.746 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:02.746 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:03.005 [2024-07-12 00:30:30.676315] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:03.005 [2024-07-12 00:30:30.676402] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:03.005 true 00:18:03.005 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:03.005 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:03.263 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:03.263 00:30:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
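The lvs_grow setup traced above places an lvstore on a file-backed AIO bdev, then grows it by enlarging the backing file and rescanning the bdev. A dry-run condensation of those steps; `RPC` echoes the commands, and `AIO_FILE` is a scratch path chosen here for illustration (the test uses a file under its own target directory):

```shell
#!/bin/sh
# Dry-run sketch of the lvs_grow flow above. The truncate/rm steps run for
# real against a scratch file; the RPC lines only echo. LVS_UUID is a
# placeholder for the UUID printed by bdev_lvol_create_lvstore.
RPC="echo rpc.py"
AIO_FILE=/tmp/aio_bdev_sketch

truncate -s 200M "$AIO_FILE"                             # initial 200 MiB backing file
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096           # AIO bdev, 4 KiB blocks
$RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
     --md-pages-per-cluster-ratio 300 aio_bdev lvs       # 4 MiB clusters
$RPC bdev_lvol_create -u LVS_UUID lvol 150               # 150 MiB lvol on the store
truncate -s 400M "$AIO_FILE"                             # grow the backing file
$RPC bdev_aio_rescan aio_bdev                            # bdev picks up the new size
rm -f "$AIO_FILE"
```

In the trace, the lvstore reports 49 data clusters before the rescan; the test then verifies the cluster count after the AIO bdev grows from 51200 to 102400 blocks.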
00:18:03.522 00:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ac095e6-06a0-4f54-87c2-79df03ea6726 00:18:03.780 00:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:04.039 [2024-07-12 00:30:31.847912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.039 00:30:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=941070 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 941070 /var/tmp/bdevperf.sock 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 941070 ']' 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:04.606 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:04.606 [2024-07-12 00:30:32.212203] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:04.606 [2024-07-12 00:30:32.212309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941070 ] 00:18:04.606 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.606 [2024-07-12 00:30:32.272961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.606 [2024-07-12 00:30:32.360208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.865 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.865 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:18:04.865 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:05.430 Nvme0n1 00:18:05.430 00:30:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:05.690 [ 00:18:05.690 { 00:18:05.690 "name": "Nvme0n1", 00:18:05.690 "aliases": [ 00:18:05.690 "8ac095e6-06a0-4f54-87c2-79df03ea6726" 00:18:05.690 ], 00:18:05.690 
"product_name": "NVMe disk", 00:18:05.690 "block_size": 4096, 00:18:05.690 "num_blocks": 38912, 00:18:05.690 "uuid": "8ac095e6-06a0-4f54-87c2-79df03ea6726", 00:18:05.690 "assigned_rate_limits": { 00:18:05.690 "rw_ios_per_sec": 0, 00:18:05.690 "rw_mbytes_per_sec": 0, 00:18:05.690 "r_mbytes_per_sec": 0, 00:18:05.690 "w_mbytes_per_sec": 0 00:18:05.690 }, 00:18:05.690 "claimed": false, 00:18:05.690 "zoned": false, 00:18:05.690 "supported_io_types": { 00:18:05.690 "read": true, 00:18:05.690 "write": true, 00:18:05.690 "unmap": true, 00:18:05.690 "write_zeroes": true, 00:18:05.690 "flush": true, 00:18:05.690 "reset": true, 00:18:05.690 "compare": true, 00:18:05.690 "compare_and_write": true, 00:18:05.690 "abort": true, 00:18:05.690 "nvme_admin": true, 00:18:05.690 "nvme_io": true 00:18:05.690 }, 00:18:05.690 "memory_domains": [ 00:18:05.690 { 00:18:05.690 "dma_device_id": "system", 00:18:05.690 "dma_device_type": 1 00:18:05.690 } 00:18:05.690 ], 00:18:05.690 "driver_specific": { 00:18:05.690 "nvme": [ 00:18:05.690 { 00:18:05.690 "trid": { 00:18:05.690 "trtype": "TCP", 00:18:05.690 "adrfam": "IPv4", 00:18:05.690 "traddr": "10.0.0.2", 00:18:05.690 "trsvcid": "4420", 00:18:05.690 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:05.690 }, 00:18:05.690 "ctrlr_data": { 00:18:05.690 "cntlid": 1, 00:18:05.690 "vendor_id": "0x8086", 00:18:05.690 "model_number": "SPDK bdev Controller", 00:18:05.690 "serial_number": "SPDK0", 00:18:05.690 "firmware_revision": "24.05.1", 00:18:05.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:05.690 "oacs": { 00:18:05.690 "security": 0, 00:18:05.690 "format": 0, 00:18:05.690 "firmware": 0, 00:18:05.690 "ns_manage": 0 00:18:05.690 }, 00:18:05.690 "multi_ctrlr": true, 00:18:05.690 "ana_reporting": false 00:18:05.690 }, 00:18:05.690 "vs": { 00:18:05.690 "nvme_version": "1.3" 00:18:05.690 }, 00:18:05.690 "ns_data": { 00:18:05.690 "id": 1, 00:18:05.690 "can_share": true 00:18:05.690 } 00:18:05.690 } 00:18:05.690 ], 00:18:05.690 "mp_policy": 
"active_passive" 00:18:05.690 } 00:18:05.690 } 00:18:05.690 ] 00:18:05.690 00:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=941173 00:18:05.690 00:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:05.690 00:30:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.690 Running I/O for 10 seconds... 00:18:06.629 Latency(us) 00:18:06.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:06.629 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:18:06.629 =================================================================================================================== 00:18:06.629 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:18:06.629 00:18:07.567 00:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:07.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:07.826 Nvme0n1 : 2.00 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:18:07.826 =================================================================================================================== 00:18:07.826 Total : 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:18:07.826 00:18:07.826 true 00:18:07.826 00:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:07.826 00:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:08.084 00:30:35 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:08.084 00:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:08.084 00:30:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 941173 00:18:08.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:08.653 Nvme0n1 : 3.00 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:18:08.653 =================================================================================================================== 00:18:08.653 Total : 13970.33 54.57 0.00 0.00 0.00 0.00 0.00 00:18:08.653 00:18:09.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:09.593 Nvme0n1 : 4.00 14033.75 54.82 0.00 0.00 0.00 0.00 0.00 00:18:09.593 =================================================================================================================== 00:18:09.593 Total : 14033.75 54.82 0.00 0.00 0.00 0.00 0.00 00:18:09.593 00:18:10.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:10.975 Nvme0n1 : 5.00 14071.80 54.97 0.00 0.00 0.00 0.00 0.00 00:18:10.975 =================================================================================================================== 00:18:10.975 Total : 14071.80 54.97 0.00 0.00 0.00 0.00 0.00 00:18:10.975 00:18:11.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.914 Nvme0n1 : 6.00 14118.33 55.15 0.00 0.00 0.00 0.00 0.00 00:18:11.914 =================================================================================================================== 00:18:11.914 Total : 14118.33 55.15 0.00 0.00 0.00 0.00 0.00 00:18:11.914 00:18:12.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:12.848 Nvme0n1 : 7.00 14152.86 55.28 0.00 0.00 0.00 0.00 0.00 00:18:12.848 
=================================================================================================================== 00:18:12.848 Total : 14152.86 55.28 0.00 0.00 0.00 0.00 0.00 00:18:12.848 00:18:13.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:13.812 Nvme0n1 : 8.00 14177.62 55.38 0.00 0.00 0.00 0.00 0.00 00:18:13.812 =================================================================================================================== 00:18:13.812 Total : 14177.62 55.38 0.00 0.00 0.00 0.00 0.00 00:18:13.812 00:18:14.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.746 Nvme0n1 : 9.00 14190.33 55.43 0.00 0.00 0.00 0.00 0.00 00:18:14.746 =================================================================================================================== 00:18:14.746 Total : 14190.33 55.43 0.00 0.00 0.00 0.00 0.00 00:18:14.746 00:18:15.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.690 Nvme0n1 : 10.00 14214.00 55.52 0.00 0.00 0.00 0.00 0.00 00:18:15.690 =================================================================================================================== 00:18:15.690 Total : 14214.00 55.52 0.00 0.00 0.00 0.00 0.00 00:18:15.690 00:18:15.690 00:18:15.690 Latency(us) 00:18:15.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:15.690 Nvme0n1 : 10.00 14215.95 55.53 0.00 0.00 8997.95 5242.88 18252.99 00:18:15.690 =================================================================================================================== 00:18:15.690 Total : 14215.95 55.53 0.00 0.00 8997.95 5242.88 18252.99 00:18:15.690 0 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 941070 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 
941070 ']' 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 941070 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 941070 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 941070' 00:18:15.690 killing process with pid 941070 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 941070 00:18:15.690 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.690 00:18:15.690 Latency(us) 00:18:15.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.690 =================================================================================================================== 00:18:15.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.690 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 941070 00:18:15.948 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:16.207 00:30:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:16.465 00:30:44 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:16.465 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:16.723 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:16.723 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:18:16.723 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:16.983 [2024-07-12 00:30:44.814698] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:17.241 00:30:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:17.498 request: 00:18:17.498 { 00:18:17.498 "uuid": "d0370c7e-7b76-48e4-af51-2611647ba215", 00:18:17.498 "method": "bdev_lvol_get_lvstores", 00:18:17.498 "req_id": 1 00:18:17.498 } 00:18:17.498 Got JSON-RPC error response 00:18:17.498 response: 00:18:17.498 { 00:18:17.498 "code": -19, 00:18:17.498 "message": "No such device" 00:18:17.498 } 00:18:17.498 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:18:17.498 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:17.498 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:17.498 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:17.498 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:17.756 aio_bdev 
00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ac095e6-06a0-4f54-87c2-79df03ea6726 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=8ac095e6-06a0-4f54-87c2-79df03ea6726 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:17.756 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:18.013 00:30:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ac095e6-06a0-4f54-87c2-79df03ea6726 -t 2000 00:18:18.270 [ 00:18:18.270 { 00:18:18.270 "name": "8ac095e6-06a0-4f54-87c2-79df03ea6726", 00:18:18.270 "aliases": [ 00:18:18.270 "lvs/lvol" 00:18:18.270 ], 00:18:18.270 "product_name": "Logical Volume", 00:18:18.270 "block_size": 4096, 00:18:18.270 "num_blocks": 38912, 00:18:18.270 "uuid": "8ac095e6-06a0-4f54-87c2-79df03ea6726", 00:18:18.270 "assigned_rate_limits": { 00:18:18.270 "rw_ios_per_sec": 0, 00:18:18.270 "rw_mbytes_per_sec": 0, 00:18:18.270 "r_mbytes_per_sec": 0, 00:18:18.270 "w_mbytes_per_sec": 0 00:18:18.270 }, 00:18:18.270 "claimed": false, 00:18:18.270 "zoned": false, 00:18:18.270 "supported_io_types": { 00:18:18.270 "read": true, 00:18:18.270 "write": true, 00:18:18.270 "unmap": true, 00:18:18.270 "write_zeroes": true, 00:18:18.270 "flush": false, 00:18:18.270 "reset": true, 00:18:18.270 "compare": false, 
00:18:18.270 "compare_and_write": false, 00:18:18.270 "abort": false, 00:18:18.270 "nvme_admin": false, 00:18:18.270 "nvme_io": false 00:18:18.270 }, 00:18:18.270 "driver_specific": { 00:18:18.270 "lvol": { 00:18:18.270 "lvol_store_uuid": "d0370c7e-7b76-48e4-af51-2611647ba215", 00:18:18.270 "base_bdev": "aio_bdev", 00:18:18.270 "thin_provision": false, 00:18:18.270 "num_allocated_clusters": 38, 00:18:18.270 "snapshot": false, 00:18:18.270 "clone": false, 00:18:18.270 "esnap_clone": false 00:18:18.270 } 00:18:18.270 } 00:18:18.270 } 00:18:18.270 ] 00:18:18.270 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:18:18.270 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:18.270 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:18.527 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:18.527 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0370c7e-7b76-48e4-af51-2611647ba215 00:18:18.527 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:19.093 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:19.093 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ac095e6-06a0-4f54-87c2-79df03ea6726 00:18:19.093 00:30:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
d0370c7e-7b76-48e4-af51-2611647ba215 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.660 00:18:19.660 real 0m18.283s 00:18:19.660 user 0m17.888s 00:18:19.660 sys 0m1.857s 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:18:19.660 ************************************ 00:18:19.660 END TEST lvs_grow_clean 00:18:19.660 ************************************ 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.660 00:30:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:19.918 ************************************ 00:18:19.918 START TEST lvs_grow_dirty 00:18:19.918 ************************************ 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:19.918 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:20.176 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:20.176 00:30:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:20.434 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:20.434 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:20.434 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:20.693 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:20.693 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:20.693 
00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 lvol 150 00:18:20.693 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:20.693 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:20.693 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:21.259 [2024-07-12 00:30:48.801952] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:21.259 [2024-07-12 00:30:48.802053] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:21.259 true 00:18:21.259 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:21.259 00:30:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:21.517 00:30:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:21.517 00:30:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:21.775 00:30:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:22.034 00:30:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:22.292 [2024-07-12 00:30:49.993649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.292 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=942735 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 942735 /var/tmp/bdevperf.sock 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 942735 ']' 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:22.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.550 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:22.550 [2024-07-12 00:30:50.356940] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:22.550 [2024-07-12 00:30:50.357045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942735 ] 00:18:22.550 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.808 [2024-07-12 00:30:50.417719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.808 [2024-07-12 00:30:50.505050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.809 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:22.809 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:22.809 00:30:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:23.373 Nvme0n1 00:18:23.373 00:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:23.630 [ 00:18:23.630 { 00:18:23.630 "name": "Nvme0n1", 00:18:23.630 "aliases": [ 00:18:23.630 "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71" 00:18:23.630 ], 00:18:23.630 "product_name": "NVMe disk", 00:18:23.630 "block_size": 4096, 00:18:23.630 "num_blocks": 
38912, 00:18:23.630 "uuid": "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71", 00:18:23.630 "assigned_rate_limits": { 00:18:23.630 "rw_ios_per_sec": 0, 00:18:23.630 "rw_mbytes_per_sec": 0, 00:18:23.630 "r_mbytes_per_sec": 0, 00:18:23.630 "w_mbytes_per_sec": 0 00:18:23.630 }, 00:18:23.630 "claimed": false, 00:18:23.630 "zoned": false, 00:18:23.630 "supported_io_types": { 00:18:23.630 "read": true, 00:18:23.630 "write": true, 00:18:23.630 "unmap": true, 00:18:23.630 "write_zeroes": true, 00:18:23.630 "flush": true, 00:18:23.630 "reset": true, 00:18:23.630 "compare": true, 00:18:23.630 "compare_and_write": true, 00:18:23.630 "abort": true, 00:18:23.630 "nvme_admin": true, 00:18:23.630 "nvme_io": true 00:18:23.630 }, 00:18:23.630 "memory_domains": [ 00:18:23.630 { 00:18:23.630 "dma_device_id": "system", 00:18:23.630 "dma_device_type": 1 00:18:23.630 } 00:18:23.630 ], 00:18:23.630 "driver_specific": { 00:18:23.630 "nvme": [ 00:18:23.630 { 00:18:23.630 "trid": { 00:18:23.630 "trtype": "TCP", 00:18:23.630 "adrfam": "IPv4", 00:18:23.630 "traddr": "10.0.0.2", 00:18:23.630 "trsvcid": "4420", 00:18:23.630 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:23.630 }, 00:18:23.630 "ctrlr_data": { 00:18:23.630 "cntlid": 1, 00:18:23.630 "vendor_id": "0x8086", 00:18:23.630 "model_number": "SPDK bdev Controller", 00:18:23.630 "serial_number": "SPDK0", 00:18:23.630 "firmware_revision": "24.05.1", 00:18:23.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:23.630 "oacs": { 00:18:23.630 "security": 0, 00:18:23.630 "format": 0, 00:18:23.630 "firmware": 0, 00:18:23.630 "ns_manage": 0 00:18:23.630 }, 00:18:23.630 "multi_ctrlr": true, 00:18:23.630 "ana_reporting": false 00:18:23.630 }, 00:18:23.630 "vs": { 00:18:23.630 "nvme_version": "1.3" 00:18:23.630 }, 00:18:23.630 "ns_data": { 00:18:23.630 "id": 1, 00:18:23.630 "can_share": true 00:18:23.630 } 00:18:23.630 } 00:18:23.630 ], 00:18:23.630 "mp_policy": "active_passive" 00:18:23.630 } 00:18:23.630 } 00:18:23.630 ] 00:18:23.630 00:30:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=942839 00:18:23.630 00:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.630 00:30:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:23.886 Running I/O for 10 seconds... 00:18:24.817 Latency(us) 00:18:24.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.817 Nvme0n1 : 1.00 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:18:24.817 =================================================================================================================== 00:18:24.817 Total : 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:18:24.817 00:18:25.761 00:30:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:25.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.761 Nvme0n1 : 2.00 13653.00 53.33 0.00 0.00 0.00 0.00 0.00 00:18:25.761 =================================================================================================================== 00:18:25.761 Total : 13653.00 53.33 0.00 0.00 0.00 0.00 0.00 00:18:25.761 00:18:26.019 true 00:18:26.019 00:30:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:26.019 00:30:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:26.278 00:30:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 
00:18:26.278 00:30:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:26.278 00:30:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 942839 00:18:26.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:26.844 Nvme0n1 : 3.00 13716.33 53.58 0.00 0.00 0.00 0.00 0.00 00:18:26.844 =================================================================================================================== 00:18:26.844 Total : 13716.33 53.58 0.00 0.00 0.00 0.00 0.00 00:18:26.844 00:18:27.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:27.777 Nvme0n1 : 4.00 13780.00 53.83 0.00 0.00 0.00 0.00 0.00 00:18:27.777 =================================================================================================================== 00:18:27.777 Total : 13780.00 53.83 0.00 0.00 0.00 0.00 0.00 00:18:27.777 00:18:28.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:28.742 Nvme0n1 : 5.00 13818.00 53.98 0.00 0.00 0.00 0.00 0.00 00:18:28.742 =================================================================================================================== 00:18:28.742 Total : 13818.00 53.98 0.00 0.00 0.00 0.00 0.00 00:18:28.742 00:18:30.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:30.114 Nvme0n1 : 6.00 13864.50 54.16 0.00 0.00 0.00 0.00 0.00 00:18:30.114 =================================================================================================================== 00:18:30.114 Total : 13864.50 54.16 0.00 0.00 0.00 0.00 0.00 00:18:30.114 00:18:31.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.049 Nvme0n1 : 7.00 13897.71 54.29 0.00 0.00 0.00 0.00 0.00 00:18:31.049 =================================================================================================================== 00:18:31.049 Total : 13897.71 54.29 0.00 
0.00 0.00 0.00 0.00 00:18:31.049 00:18:31.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.985 Nvme0n1 : 8.00 13922.62 54.39 0.00 0.00 0.00 0.00 0.00 00:18:31.985 =================================================================================================================== 00:18:31.985 Total : 13922.62 54.39 0.00 0.00 0.00 0.00 0.00 00:18:31.985 00:18:32.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.919 Nvme0n1 : 9.00 13942.11 54.46 0.00 0.00 0.00 0.00 0.00 00:18:32.919 =================================================================================================================== 00:18:32.919 Total : 13942.11 54.46 0.00 0.00 0.00 0.00 0.00 00:18:32.919 00:18:33.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.855 Nvme0n1 : 10.00 13957.60 54.52 0.00 0.00 0.00 0.00 0.00 00:18:33.855 =================================================================================================================== 00:18:33.855 Total : 13957.60 54.52 0.00 0.00 0.00 0.00 0.00 00:18:33.855 00:18:33.855 00:18:33.855 Latency(us) 00:18:33.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.855 Nvme0n1 : 10.01 13959.16 54.53 0.00 0.00 9164.26 2366.58 17670.45 00:18:33.855 =================================================================================================================== 00:18:33.855 Total : 13959.16 54.53 0.00 0.00 9164.26 2366.58 17670.45 00:18:33.855 0 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 942735 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 942735 ']' 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 942735 00:18:33.855 00:31:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 942735 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 942735' 00:18:33.855 killing process with pid 942735 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 942735 00:18:33.855 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.855 00:18:33.855 Latency(us) 00:18:33.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.855 =================================================================================================================== 00:18:33.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.855 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 942735 00:18:34.113 00:31:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:34.371 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:34.630 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:34.630 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 940635 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 940635 00:18:34.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 940635 Killed "${NVMF_APP[@]}" "$@" 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=943848 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 943848 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 943848 ']' 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.887 00:31:02 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.887 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:35.145 [2024-07-12 00:31:02.730014] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:35.145 [2024-07-12 00:31:02.730098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.145 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.145 [2024-07-12 00:31:02.795213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.145 [2024-07-12 00:31:02.881369] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.145 [2024-07-12 00:31:02.881421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.145 [2024-07-12 00:31:02.881437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.145 [2024-07-12 00:31:02.881451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.145 [2024-07-12 00:31:02.881463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.145 [2024-07-12 00:31:02.881497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.145 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.145 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:18:35.145 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.145 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.145 00:31:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:35.403 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.403 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:35.662 [2024-07-12 00:31:03.283026] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:35.662 [2024-07-12 00:31:03.283170] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:35.662 [2024-07-12 00:31:03.283224] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:35.662 00:31:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:35.662 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:35.920 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 -t 2000 00:18:36.178 [ 00:18:36.178 { 00:18:36.178 "name": "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71", 00:18:36.178 "aliases": [ 00:18:36.178 "lvs/lvol" 00:18:36.178 ], 00:18:36.178 "product_name": "Logical Volume", 00:18:36.178 "block_size": 4096, 00:18:36.178 "num_blocks": 38912, 00:18:36.178 "uuid": "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71", 00:18:36.178 "assigned_rate_limits": { 00:18:36.178 "rw_ios_per_sec": 0, 00:18:36.178 "rw_mbytes_per_sec": 0, 00:18:36.178 "r_mbytes_per_sec": 0, 00:18:36.178 "w_mbytes_per_sec": 0 00:18:36.178 }, 00:18:36.178 "claimed": false, 00:18:36.178 "zoned": false, 00:18:36.178 "supported_io_types": { 00:18:36.178 "read": true, 00:18:36.178 "write": true, 00:18:36.178 "unmap": true, 00:18:36.178 "write_zeroes": true, 00:18:36.178 "flush": false, 00:18:36.178 "reset": true, 00:18:36.178 "compare": false, 00:18:36.178 "compare_and_write": false, 00:18:36.178 "abort": false, 00:18:36.178 "nvme_admin": false, 00:18:36.178 "nvme_io": false 00:18:36.178 }, 00:18:36.178 "driver_specific": { 00:18:36.178 "lvol": { 00:18:36.178 "lvol_store_uuid": "6c3890bb-39d7-4c30-ae89-0ac14b9695d6", 00:18:36.178 "base_bdev": "aio_bdev", 00:18:36.178 "thin_provision": false, 00:18:36.178 "num_allocated_clusters": 38, 00:18:36.178 "snapshot": false, 00:18:36.178 
"clone": false, 00:18:36.178 "esnap_clone": false 00:18:36.178 } 00:18:36.178 } 00:18:36.178 } 00:18:36.178 ] 00:18:36.178 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:36.178 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:36.178 00:31:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:36.435 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:36.435 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:36.435 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:36.693 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:36.693 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:36.953 [2024-07-12 00:31:04.764656] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.213 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.214 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:37.214 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:37.214 00:31:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:37.473 request: 00:18:37.473 { 00:18:37.473 "uuid": "6c3890bb-39d7-4c30-ae89-0ac14b9695d6", 00:18:37.473 "method": "bdev_lvol_get_lvstores", 00:18:37.473 "req_id": 1 00:18:37.473 } 00:18:37.473 Got JSON-RPC error response 00:18:37.473 response: 00:18:37.473 { 00:18:37.473 "code": -19, 00:18:37.473 "message": "No such device" 00:18:37.473 } 00:18:37.473 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:18:37.473 00:31:05 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.473 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.473 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.473 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:37.732 aio_bdev 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:37.732 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:37.990 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 -t 2000 00:18:38.248 [ 00:18:38.248 { 00:18:38.248 "name": "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71", 00:18:38.248 "aliases": [ 00:18:38.248 "lvs/lvol" 00:18:38.248 ], 00:18:38.248 "product_name": "Logical Volume", 00:18:38.248 "block_size": 4096, 
00:18:38.248 "num_blocks": 38912, 00:18:38.248 "uuid": "7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71", 00:18:38.248 "assigned_rate_limits": { 00:18:38.248 "rw_ios_per_sec": 0, 00:18:38.248 "rw_mbytes_per_sec": 0, 00:18:38.248 "r_mbytes_per_sec": 0, 00:18:38.248 "w_mbytes_per_sec": 0 00:18:38.248 }, 00:18:38.248 "claimed": false, 00:18:38.248 "zoned": false, 00:18:38.248 "supported_io_types": { 00:18:38.248 "read": true, 00:18:38.248 "write": true, 00:18:38.248 "unmap": true, 00:18:38.248 "write_zeroes": true, 00:18:38.248 "flush": false, 00:18:38.248 "reset": true, 00:18:38.248 "compare": false, 00:18:38.248 "compare_and_write": false, 00:18:38.248 "abort": false, 00:18:38.248 "nvme_admin": false, 00:18:38.248 "nvme_io": false 00:18:38.248 }, 00:18:38.248 "driver_specific": { 00:18:38.248 "lvol": { 00:18:38.248 "lvol_store_uuid": "6c3890bb-39d7-4c30-ae89-0ac14b9695d6", 00:18:38.248 "base_bdev": "aio_bdev", 00:18:38.248 "thin_provision": false, 00:18:38.248 "num_allocated_clusters": 38, 00:18:38.248 "snapshot": false, 00:18:38.248 "clone": false, 00:18:38.248 "esnap_clone": false 00:18:38.248 } 00:18:38.248 } 00:18:38.248 } 00:18:38.248 ] 00:18:38.248 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:18:38.248 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:38.248 00:31:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:38.506 00:31:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:38.506 00:31:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:38.506 00:31:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:38.764 00:31:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:38.764 00:31:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e9402ae-7fdf-4fb7-be1f-11a2aff3ab71 00:18:39.022 00:31:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c3890bb-39d7-4c30-ae89-0ac14b9695d6 00:18:39.588 00:31:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:39.847 00:18:39.847 real 0m19.976s 00:18:39.847 user 0m50.681s 00:18:39.847 sys 0m4.520s 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:39.847 ************************************ 00:18:39.847 END TEST lvs_grow_dirty 00:18:39.847 ************************************ 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:39.847 
00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:39.847 nvmf_trace.0 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:18:39.847 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.848 rmmod nvme_tcp 00:18:39.848 rmmod nvme_fabrics 00:18:39.848 rmmod nvme_keyring 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 943848 ']' 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 943848 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 943848 ']' 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 943848 00:18:39.848 00:31:07 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 943848 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 943848' 00:18:39.848 killing process with pid 943848 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 943848 00:18:39.848 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 943848 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.108 00:31:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.647 00:31:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.647 00:18:42.647 real 0m43.077s 00:18:42.647 user 1m14.683s 00:18:42.647 sys 0m7.936s 00:18:42.647 00:31:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.647 00:31:09 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.647 ************************************ 00:18:42.647 END TEST nvmf_lvs_grow 00:18:42.647 ************************************ 00:18:42.647 00:31:09 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:42.647 00:31:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:42.647 00:31:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.647 00:31:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.647 ************************************ 00:18:42.647 START TEST nvmf_bdev_io_wait 00:18:42.647 ************************************ 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:42.647 * Looking for test storage... 00:18:42.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 
00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.647 00:31:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@297 -- # x722=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:44.025 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:44.025 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp 
== rdma ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:44.025 Found net devices under 0000:08:00.0: cvl_0_0 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:44.025 Found net devices under 0000:08:00.1: cvl_0_1 00:18:44.025 00:31:11 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.025 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:18:44.025 00:18:44.025 --- 10.0.0.2 ping statistics --- 00:18:44.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.026 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:44.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:18:44.026 00:18:44.026 --- 10.0.0.1 ping statistics --- 00:18:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.026 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=945891 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 945891 00:18:44.026 
00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 945891 ']' 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:44.026 00:31:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.026 [2024-07-12 00:31:11.807092] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.026 [2024-07-12 00:31:11.807192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.026 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.284 [2024-07-12 00:31:11.872749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.284 [2024-07-12 00:31:11.965526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.284 [2024-07-12 00:31:11.965592] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.284 [2024-07-12 00:31:11.965619] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.284 [2024-07-12 00:31:11.965639] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:44.284 [2024-07-12 00:31:11.965657] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.284 [2024-07-12 00:31:11.965749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.284 [2024-07-12 00:31:11.965804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.284 [2024-07-12 00:31:11.965857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.284 [2024-07-12 00:31:11.965866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.284 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 [2024-07-12 00:31:12.153511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 Malloc0 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:44.579 [2024-07-12 00:31:12.225028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=945922 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=945924 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.579 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.579 { 00:18:44.579 "params": { 00:18:44.579 "name": "Nvme$subsystem", 00:18:44.579 "trtype": "$TEST_TRANSPORT", 00:18:44.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.579 "adrfam": "ipv4", 00:18:44.579 "trsvcid": "$NVMF_PORT", 00:18:44.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.579 "hdgst": ${hdgst:-false}, 00:18:44.579 "ddgst": ${ddgst:-false} 00:18:44.579 }, 00:18:44.579 "method": "bdev_nvme_attach_controller" 00:18:44.580 } 
00:18:44.580 EOF 00:18:44.580 )") 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=945926 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.580 { 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme$subsystem", 00:18:44.580 "trtype": "$TEST_TRANSPORT", 00:18:44.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "$NVMF_PORT", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.580 "hdgst": ${hdgst:-false}, 00:18:44.580 "ddgst": ${ddgst:-false} 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 } 00:18:44.580 EOF 00:18:44.580 )") 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=945929 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # 
config=() 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.580 { 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme$subsystem", 00:18:44.580 "trtype": "$TEST_TRANSPORT", 00:18:44.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "$NVMF_PORT", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.580 "hdgst": ${hdgst:-false}, 00:18:44.580 "ddgst": ${ddgst:-false} 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 } 00:18:44.580 EOF 00:18:44.580 )") 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:44.580 { 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme$subsystem", 00:18:44.580 "trtype": "$TEST_TRANSPORT", 00:18:44.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "$NVMF_PORT", 00:18:44.580 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.580 "hdgst": ${hdgst:-false}, 00:18:44.580 "ddgst": ${ddgst:-false} 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 } 00:18:44.580 EOF 00:18:44.580 )") 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 945922 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme1", 00:18:44.580 "trtype": "tcp", 00:18:44.580 "traddr": "10.0.0.2", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "4420", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.580 "hdgst": false, 00:18:44.580 "ddgst": false 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 }' 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme1", 00:18:44.580 "trtype": "tcp", 00:18:44.580 "traddr": "10.0.0.2", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "4420", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.580 "hdgst": false, 00:18:44.580 
"ddgst": false 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 }' 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme1", 00:18:44.580 "trtype": "tcp", 00:18:44.580 "traddr": "10.0.0.2", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "4420", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.580 "hdgst": false, 00:18:44.580 "ddgst": false 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 }' 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:44.580 00:31:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:44.580 "params": { 00:18:44.580 "name": "Nvme1", 00:18:44.580 "trtype": "tcp", 00:18:44.580 "traddr": "10.0.0.2", 00:18:44.580 "adrfam": "ipv4", 00:18:44.580 "trsvcid": "4420", 00:18:44.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.580 "hdgst": false, 00:18:44.580 "ddgst": false 00:18:44.580 }, 00:18:44.580 "method": "bdev_nvme_attach_controller" 00:18:44.580 }' 00:18:44.580 [2024-07-12 00:31:12.275197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.580 [2024-07-12 00:31:12.275197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.580 [2024-07-12 00:31:12.275197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:44.580 [2024-07-12 00:31:12.275197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:44.580 [2024-07-12 00:31:12.275293] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:44.580 [2024-07-12 00:31:12.275293] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:44.580 [2024-07-12 00:31:12.275294] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:44.580 [2024-07-12 00:31:12.275294] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:44.580 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.580 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.580 [2024-07-12 00:31:12.407523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.838 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.838 [2024-07-12 00:31:12.474437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:44.838 [2024-07-12 00:31:12.476348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.838 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.838 [2024-07-12 00:31:12.543462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:44.838 [2024-07-12 00:31:12.545025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.838 [2024-07-12 00:31:12.612954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:44.838 [2024-07-12 00:31:12.615819] app.c:
909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.097 [2024-07-12 00:31:12.682949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.097 Running I/O for 1 seconds... 00:18:45.097 Running I/O for 1 seconds... 00:18:45.097 Running I/O for 1 seconds... 00:18:45.097 Running I/O for 1 seconds... 00:18:46.047 00:18:46.047 Latency(us) 00:18:46.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.047 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:46.047 Nvme1n1 : 1.00 156612.80 611.77 0.00 0.00 813.91 320.09 1061.93 00:18:46.047 =================================================================================================================== 00:18:46.047 Total : 156612.80 611.77 0.00 0.00 813.91 320.09 1061.93 00:18:46.047 00:18:46.047 Latency(us) 00:18:46.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.047 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:46.047 Nvme1n1 : 1.02 5583.17 21.81 0.00 0.00 22728.83 10000.31 33787.45 00:18:46.047 =================================================================================================================== 00:18:46.047 Total : 5583.17 21.81 0.00 0.00 22728.83 10000.31 33787.45 00:18:46.047 00:18:46.047 Latency(us) 00:18:46.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.047 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:46.047 Nvme1n1 : 1.01 5541.83 21.65 0.00 0.00 23016.93 6068.15 47574.28 00:18:46.047 =================================================================================================================== 00:18:46.047 Total : 5541.83 21.65 0.00 0.00 23016.93 6068.15 47574.28 00:18:46.047 00:18:46.047 Latency(us) 00:18:46.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.047 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 
00:18:46.047 Nvme1n1 : 1.01 8799.99 34.37 0.00 0.00 14475.69 8058.50 26796.94 00:18:46.047 =================================================================================================================== 00:18:46.047 Total : 8799.99 34.37 0.00 0.00 14475.69 8058.50 26796.94 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 945924 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 945926 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 945929 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.307 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.567 rmmod nvme_tcp 00:18:46.567 rmmod nvme_fabrics 00:18:46.567 rmmod nvme_keyring 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@124 -- # set -e 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 945891 ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 945891 ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 945891' 00:18:46.567 killing process with pid 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 945891 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.567 00:31:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.105 00:31:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:49.105 00:18:49.105 real 0m6.530s 00:18:49.105 user 0m15.170s 00:18:49.105 sys 0m3.081s 00:18:49.105 00:31:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:49.105 00:31:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:49.105 ************************************ 00:18:49.105 END TEST nvmf_bdev_io_wait 00:18:49.105 ************************************ 00:18:49.105 00:31:16 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:49.105 00:31:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:49.105 00:31:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:49.105 00:31:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.105 ************************************ 00:18:49.105 START TEST nvmf_queue_depth 00:18:49.105 ************************************ 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:49.105 * Looking for test storage... 
00:18:49.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.105 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.106 00:31:16 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:50.484 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:50.484 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:50.484 Found net devices under 0000:08:00.0: cvl_0_0 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.484 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:50.485 Found net devices under 0000:08:00.1: cvl_0_1 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:50.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:18:50.485 00:18:50.485 --- 10.0.0.2 ping statistics --- 00:18:50.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.485 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:18:50.485 00:18:50.485 --- 10.0.0.1 ping statistics --- 00:18:50.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.485 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=947553 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 947553 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 947553 ']' 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.485 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:50.485 [2024-07-12 00:31:18.294960] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:50.485 [2024-07-12 00:31:18.295069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.745 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.745 [2024-07-12 00:31:18.361370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.746 [2024-07-12 00:31:18.447643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
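The `-m 0x2` passed to `nvmf_tgt` above is a CPU core bitmask: each set bit selects one core, so `0x2` selects core 1, consistent with the "Reactor started on core 1" notice in this run (and `-c 0x80` for one of the earlier bdevperf instances selects core 7). A minimal decoding sketch:

```python
# Sketch: decoding an SPDK/DPDK hex core mask (the -m / -c arguments
# seen in the log). Each set bit selects one CPU core index.
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by a core mask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(cores_from_mask(0x2))   # nvmf_tgt -m 0x2  -> core 1
print(cores_from_mask(0x80))  # bdevperf -c 0x80 -> core 7
```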
00:18:50.746 [2024-07-12 00:31:18.447705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.746 [2024-07-12 00:31:18.447722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.746 [2024-07-12 00:31:18.447735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.746 [2024-07-12 00:31:18.447747] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.746 [2024-07-12 00:31:18.447783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:50.746 [2024-07-12 00:31:18.577974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.746 00:31:18 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.746 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.005 Malloc0 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.005 [2024-07-12 00:31:18.634601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=947661 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 947661 /var/tmp/bdevperf.sock 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 947661 ']' 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:51.005 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.005 [2024-07-12 00:31:18.682737] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:51.005 [2024-07-12 00:31:18.682822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947661 ] 00:18:51.005 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.005 [2024-07-12 00:31:18.741955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.005 [2024-07-12 00:31:18.829131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.265 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:51.265 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:18:51.265 00:31:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:51.265 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.265 00:31:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:51.525 NVMe0n1 00:18:51.525 00:31:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.525 00:31:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:51.525 Running I/O for 10 seconds... 
00:19:03.742 00:19:03.742 Latency(us) 00:19:03.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.743 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:03.743 Verification LBA range: start 0x0 length 0x4000 00:19:03.743 NVMe0n1 : 10.11 7992.35 31.22 0.00 0.00 127528.52 28932.93 82332.63 00:19:03.743 =================================================================================================================== 00:19:03.743 Total : 7992.35 31.22 0.00 0.00 127528.52 28932.93 82332.63 00:19:03.743 0 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 947661 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 947661 ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 947661 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 947661 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 947661' 00:19:03.743 killing process with pid 947661 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 947661 00:19:03.743 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.743 00:19:03.743 Latency(us) 00:19:03.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.743 
=================================================================================================================== 00:19:03.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 947661 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.743 rmmod nvme_tcp 00:19:03.743 rmmod nvme_fabrics 00:19:03.743 rmmod nvme_keyring 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 947553 ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 947553 ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:03.743 00:31:29 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 947553' 00:19:03.743 killing process with pid 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 947553 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.743 00:31:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.313 00:31:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:04.313 00:19:04.313 real 0m15.405s 00:19:04.313 user 0m22.544s 00:19:04.313 sys 0m2.473s 00:19:04.313 00:31:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:04.313 00:31:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:19:04.313 ************************************ 00:19:04.313 END TEST nvmf_queue_depth 00:19:04.313 
************************************ 00:19:04.313 00:31:31 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:04.313 00:31:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:04.313 00:31:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:04.313 00:31:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:04.313 ************************************ 00:19:04.313 START TEST nvmf_target_multipath 00:19:04.313 ************************************ 00:19:04.313 00:31:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:04.313 * Looking for test storage... 00:19:04.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:04.313 00:31:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:19:04.314 00:31:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:06.221 
00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.221 00:31:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:06.221 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:06.221 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:06.222 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:06.222 
00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:06.222 Found net devices under 0000:08:00.0: cvl_0_0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:06.222 Found net devices under 0000:08:00.1: cvl_0_1 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:06.222 00:31:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:06.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:06.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:19:06.222 00:19:06.222 --- 10.0.0.2 ping statistics --- 00:19:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.222 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:19:06.222 00:19:06.222 --- 10.0.0.1 ping statistics --- 00:19:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.222 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:19:06.222 only one NIC for nvmf test 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:06.222 rmmod nvme_tcp 00:19:06.222 rmmod nvme_fabrics 00:19:06.222 rmmod nvme_keyring 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.222 00:31:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.131 00:31:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:08.132 00:31:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:08.132 00:19:08.132 real 0m3.915s 00:19:08.132 user 0m0.632s 00:19:08.132 sys 0m1.256s 00:19:08.132 00:31:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:08.132 00:31:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:08.132 ************************************ 00:19:08.132 END TEST nvmf_target_multipath 00:19:08.132 ************************************ 00:19:08.132 00:31:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:08.132 00:31:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:08.132 00:31:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:08.132 00:31:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:08.132 ************************************ 00:19:08.132 START TEST nvmf_zcopy 00:19:08.132 ************************************ 00:19:08.132 00:31:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:19:08.390 * Looking for test storage... 
00:19:08.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.390 00:31:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.390 00:31:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.390 00:31:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.390 00:31:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.390 00:31:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.767 00:31:37 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.767 00:31:37 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:09.767 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:09.767 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:09.767 Found net devices under 0000:08:00.0: cvl_0_0 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.767 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:09.768 Found net devices under 0000:08:00.1: cvl_0_1 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.768 00:31:37 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.768 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:10.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:19:10.026 00:19:10.026 --- 10.0.0.2 ping statistics --- 00:19:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.026 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:10.026 00:19:10.026 --- 10.0.0.1 ping statistics --- 00:19:10.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.026 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=951549 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 951549 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 951549 ']' 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:10.026 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.026 [2024-07-12 00:31:37.697895] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:10.027 [2024-07-12 00:31:37.698000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.027 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.027 [2024-07-12 00:31:37.763523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.027 [2024-07-12 00:31:37.853044] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.027 [2024-07-12 00:31:37.853103] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.027 [2024-07-12 00:31:37.853119] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.027 [2024-07-12 00:31:37.853132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.027 [2024-07-12 00:31:37.853144] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:10.027 [2024-07-12 00:31:37.853180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 [2024-07-12 00:31:37.989688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.307 00:31:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 [2024-07-12 00:31:38.005848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 malloc0 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:10.307 { 00:19:10.307 "params": { 00:19:10.307 "name": "Nvme$subsystem", 00:19:10.307 "trtype": "$TEST_TRANSPORT", 00:19:10.307 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.307 "adrfam": "ipv4", 00:19:10.307 "trsvcid": "$NVMF_PORT", 00:19:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.307 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.307 "hdgst": ${hdgst:-false}, 00:19:10.307 "ddgst": ${ddgst:-false} 00:19:10.307 }, 00:19:10.307 "method": "bdev_nvme_attach_controller" 00:19:10.307 } 00:19:10.307 EOF 00:19:10.307 )") 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:10.307 00:31:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:10.307 "params": { 00:19:10.307 "name": "Nvme1", 00:19:10.307 "trtype": "tcp", 00:19:10.307 "traddr": "10.0.0.2", 00:19:10.307 "adrfam": "ipv4", 00:19:10.307 "trsvcid": "4420", 00:19:10.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.307 "hdgst": false, 00:19:10.307 "ddgst": false 00:19:10.307 }, 00:19:10.307 "method": "bdev_nvme_attach_controller" 00:19:10.307 }' 00:19:10.307 [2024-07-12 00:31:38.086284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:10.307 [2024-07-12 00:31:38.086378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951571 ] 00:19:10.307 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.574 [2024-07-12 00:31:38.147067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.574 [2024-07-12 00:31:38.238132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.831 Running I/O for 10 seconds... 00:19:20.797 00:19:20.797 Latency(us) 00:19:20.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:19:20.797 Verification LBA range: start 0x0 length 0x1000 00:19:20.797 Nvme1n1 : 10.01 5484.60 42.85 0.00 0.00 23265.55 801.00 33981.63 00:19:20.797 =================================================================================================================== 00:19:20.797 Total : 5484.60 42.85 0.00 0.00 23265.55 801.00 33981.63 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=952562 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:21.056 { 00:19:21.056 "params": { 00:19:21.056 "name": "Nvme$subsystem", 00:19:21.056 "trtype": "$TEST_TRANSPORT", 00:19:21.056 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.056 "adrfam": "ipv4", 00:19:21.056 "trsvcid": "$NVMF_PORT", 00:19:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.056 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.056 "hdgst": ${hdgst:-false}, 00:19:21.056 "ddgst": ${ddgst:-false} 00:19:21.056 }, 00:19:21.056 "method": "bdev_nvme_attach_controller" 00:19:21.056 } 00:19:21.056 EOF 00:19:21.056 )") 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:19:21.056 [2024-07-12 00:31:48.734655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.056 [2024-07-12 00:31:48.734701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:19:21.056 00:31:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:21.056 "params": { 00:19:21.056 "name": "Nvme1", 00:19:21.056 "trtype": "tcp", 00:19:21.056 "traddr": "10.0.0.2", 00:19:21.056 "adrfam": "ipv4", 00:19:21.056 "trsvcid": "4420", 00:19:21.056 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.056 "hdgst": false, 00:19:21.056 "ddgst": false 00:19:21.056 }, 00:19:21.056 "method": "bdev_nvme_attach_controller" 00:19:21.056 }' 00:19:21.056 [2024-07-12 00:31:48.742611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.056 [2024-07-12 00:31:48.742644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.056 [2024-07-12 00:31:48.750630] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.056 [2024-07-12 
00:31:48.750654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.056 [2024-07-12 00:31:48.758650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.758673] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.766670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.766693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.774640] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:21.057 [2024-07-12 00:31:48.774687] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.774711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.774734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952562 ] 00:19:21.057 [2024-07-12 00:31:48.782709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.782732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.790733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.790756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.798755] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.798777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 EAL: No free 2048 kB hugepages reported on node 
1 00:19:21.057 [2024-07-12 00:31:48.806777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.806800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.814798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.814822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.822821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.822843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.830844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.830874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.835737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.057 [2024-07-12 00:31:48.838910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.838950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.846969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.847019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.854950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.854986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.862946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.862972] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.870979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.871008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.878997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.879025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.057 [2024-07-12 00:31:48.887081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.057 [2024-07-12 00:31:48.887130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.895086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.895131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.903073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.903102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.911085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.911114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.919146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.919178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.926405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.316 [2024-07-12 00:31:48.927124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:19:21.316 [2024-07-12 00:31:48.927148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.935134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.935157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.943235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.943286] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.951258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.951307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.959283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.959332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.967304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.967354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.975319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.975380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.983333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.983379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.991362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 
00:31:48.991412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:48.999388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:48.999440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.007405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.007453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.015357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.015382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.023385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.023413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.031410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.031438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.039435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.039461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.047456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.316 [2024-07-12 00:31:49.047482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.316 [2024-07-12 00:31:49.055472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.055496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.063493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.063516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.071521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.071543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.079545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.079567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.087575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.087606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.095617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.095642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.103626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.103660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.111643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.111666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.119663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.119685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 
[2024-07-12 00:31:49.127682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.127715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.135705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.135728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.143732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.143756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.317 [2024-07-12 00:31:49.151762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.317 [2024-07-12 00:31:49.151787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.159783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.159806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.167805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.167827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.175827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.175850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.183852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.183874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.191880] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.191905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.199902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.199925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.207924] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.207947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.215947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.575 [2024-07-12 00:31:49.215970] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.575 [2024-07-12 00:31:49.223973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.223996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.232000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.232024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.240031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.240057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.248097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.248126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.256112] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.256138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 Running I/O for 5 seconds... 00:19:21.576 [2024-07-12 00:31:49.268308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.268338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.278180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.278211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.291227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.291257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.302986] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.303016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.314872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.314901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.326866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.326895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.338616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.338646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.350519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.350549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.362249] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.362278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.374291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.374320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.386133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.386162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.397981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.398010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.576 [2024-07-12 00:31:49.410030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.576 [2024-07-12 00:31:49.410058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.422087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.422125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.436169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.436197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.447248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 
[2024-07-12 00:31:49.447285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.459538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.459566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.471390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.471419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.483058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.483086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.495364] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.495392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.507126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.507162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.518970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.518998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.531208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.531236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.543217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.543245] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.554713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.554741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.566347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.566376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.578021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.578050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.589399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.589428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.600990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.601019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.612925] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.612963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.624447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.624476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.636482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.636511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:21.834 [2024-07-12 00:31:49.648546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.834 [2024-07-12 00:31:49.648574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.834 [2024-07-12 00:31:49.660533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.835 [2024-07-12 00:31:49.660560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:21.835 [2024-07-12 00:31:49.672526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:21.835 [2024-07-12 00:31:49.672554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.684580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.684616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.696507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.696536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.708383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.708412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.720389] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.720417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.732410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.732447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.744254] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.744291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.756602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.756630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.768155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.768183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.780109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.780137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.791941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.791969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.803696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.803724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.815142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.815170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.827002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.827031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.841021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.841050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.852452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.852481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.093 [2024-07-12 00:31:49.864305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.093 [2024-07-12 00:31:49.864333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.094 [2024-07-12 00:31:49.875767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.094 [2024-07-12 00:31:49.875795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.094 [2024-07-12 00:31:49.887653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.094 [2024-07-12 00:31:49.887681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.094 [2024-07-12 00:31:49.899861] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.094 [2024-07-12 00:31:49.899889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.094 [2024-07-12 00:31:49.911968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.094 [2024-07-12 00:31:49.911996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.094 [2024-07-12 00:31:49.924089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.094 [2024-07-12 00:31:49.924117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.935653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 
[2024-07-12 00:31:49.935683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.947322] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:49.947351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.959117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:49.959153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.970773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:49.970801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.982659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:49.982688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:49.994768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:49.994796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:50.006260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:50.006299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:50.017939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:50.017981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:50.028464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:50.028507] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352 [2024-07-12 00:31:50.040826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:22.352 [2024-07-12 00:31:50.040857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:22.352
[message pair repeated from 00:31:50.052906 through 00:31:52.048915, timestamps only differing: subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace]
00:19:24.422 [2024-07-12 00:31:52.048953]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.060920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.060950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.072849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.072878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.084851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.084880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.097480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.097512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.109975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.110004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.122253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.122282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.134346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.134377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.146767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.146797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:24.422 [2024-07-12 00:31:52.159245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.159275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.171915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.171945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.184193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.184222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.196757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.196786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.209330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.209359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.221505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.221534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.233738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.233768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.246191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.246229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.422 [2024-07-12 00:31:52.258224] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.422 [2024-07-12 00:31:52.258254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.270385] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.270415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.282840] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.282879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.294810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.294840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.307106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.307136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.319509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.319542] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.332030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.332059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.345038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.345067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.357604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.357641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.370308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.370338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.382445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.382474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.394892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.394921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.407417] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.407446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.419736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.419765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.432344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.432380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.444546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 [2024-07-12 00:31:52.444576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.680 [2024-07-12 00:31:52.457078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.680 
[2024-07-12 00:31:52.457108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.681 [2024-07-12 00:31:52.469560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.681 [2024-07-12 00:31:52.469601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.681 [2024-07-12 00:31:52.482000] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.681 [2024-07-12 00:31:52.482028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.681 [2024-07-12 00:31:52.494320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.681 [2024-07-12 00:31:52.494349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.681 [2024-07-12 00:31:52.506803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.681 [2024-07-12 00:31:52.506833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.681 [2024-07-12 00:31:52.518866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.681 [2024-07-12 00:31:52.518910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.531383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.531413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.543863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.543893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.556105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.556134] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.568078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.568107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.580356] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.580385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.592406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.592436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.604468] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.604499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.616943] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.616974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.629009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.629040] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.641219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.641250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.653736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.653766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:24.939 [2024-07-12 00:31:52.666649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.666683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.679204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.679235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.692025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.692056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.704600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.704634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.717424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.717453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.729645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.729675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.742277] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.742307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.755109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.755139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:24.939 [2024-07-12 00:31:52.769418] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:24.939 [2024-07-12 00:31:52.769446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.781115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.781147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.793515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.793544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.805520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.805549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.817997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.818026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.830368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.830398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.842930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.842959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.855279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.855308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.867722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.867752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.880137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.880166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.892616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.892653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.905219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.905248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.917386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.917423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.929838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.929867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.942409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.942442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.955038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.955068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.967266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 
[2024-07-12 00:31:52.967296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.979542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.979570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:52.991842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:52.991873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:53.004002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:53.004032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:53.016252] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:53.016281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.198 [2024-07-12 00:31:53.028709] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.198 [2024-07-12 00:31:53.028747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.041065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.041100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.053282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.053311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.065314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.065343] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.077656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.077685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.089953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.089982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.102223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.102252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.114070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.114100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.126218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.126251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.138301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.138331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.150855] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.150884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.163080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.163109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:25.455 [2024-07-12 00:31:53.175384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.175414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.187797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.187827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.199701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.199730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.211747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.211777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.223289] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.223327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.234790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.234819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.246920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.246950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.259209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.259245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.271646] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.271676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.455 [2024-07-12 00:31:53.283638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.455 [2024-07-12 00:31:53.283668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.295939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.295969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.308106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.308136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.319998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.320028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.332437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.332473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.345031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.345061] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.357143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.357173] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.369006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.369035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.381423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.381460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.393508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.393538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.406195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.406229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.420517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.420546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.432215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.432244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.443869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.443911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.455951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.455981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.468281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 
[2024-07-12 00:31:53.468310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.480540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.480570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.493233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.493263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.505384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.505420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.712 [2024-07-12 00:31:53.517668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.712 [2024-07-12 00:31:53.517698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.713 [2024-07-12 00:31:53.529835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.713 [2024-07-12 00:31:53.529864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.713 [2024-07-12 00:31:53.541960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.713 [2024-07-12 00:31:53.541990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.554450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.554480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.566572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.566610] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.578521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.578563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.590581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.590624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.603159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.603188] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.615603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.615633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.627884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.627914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.640166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.640195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.652420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.652450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.664890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.664920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:25.971 [2024-07-12 00:31:53.677409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.677447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.689527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.689558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.701838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.701869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.713912] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.713942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.726223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.726253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.738452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.738481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.750826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.971 [2024-07-12 00:31:53.750856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.971 [2024-07-12 00:31:53.763477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.972 [2024-07-12 00:31:53.763514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.972 [2024-07-12 00:31:53.776175] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.972 [2024-07-12 00:31:53.776205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.972 [2024-07-12 00:31:53.788304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.972 [2024-07-12 00:31:53.788334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:25.972 [2024-07-12 00:31:53.800886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:25.972 [2024-07-12 00:31:53.800916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.813068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.813098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.825510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.825539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.837230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.837259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.849457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.849486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.860834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.860863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.873129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.873158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.885787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.885816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.898108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.898137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.910863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.910901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.923061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.923090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.935237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.935267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.947345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.947374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.959218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.959247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.971436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 
[2024-07-12 00:31:53.971465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:53.990239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:53.990279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.002101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.002131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.014505] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.014538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.027189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.027219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.039847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.039877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.052560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.052603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.230 [2024-07-12 00:31:54.065238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.230 [2024-07-12 00:31:54.065269] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.077845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.077877] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.090357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.090387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.102765] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.102795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.114889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.114919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.127166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.127196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.139271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.139301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.151380] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.151418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.163790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.163819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.175906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.175936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:19:26.489 [2024-07-12 00:31:54.188221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.188250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.200448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.200478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.212718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.212748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.225183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.225213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.237224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.237254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.249839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.249869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.261950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.261980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.274165] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.274194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.280683] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.280712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489
00:19:26.489 Latency(us)
00:19:26.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:26.489 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:26.489 Nvme1n1 : 5.01 10362.71 80.96 0.00 0.00 12333.81 5898.24 20000.62
00:19:26.489 ===================================================================================================================
00:19:26.489 Total : 10362.71 80.96 0.00 0.00 12333.81 5898.24 20000.62
00:19:26.489 [2024-07-12 00:31:54.288697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.288725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.296728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.296759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.304847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.304912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.312848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.312910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.489 [2024-07-12 00:31:54.324930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.489 [2024-07-12 00:31:54.324999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.747 [2024-07-12 00:31:54.332926]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.332987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.340965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.341031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.348983] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.349045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.356984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.357046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.369037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.369109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.377032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.377086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.385086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.385153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.393117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.393181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.401115] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.401171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.409136] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.409196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.417183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.417247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.425170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.425219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.433130] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.433159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 [2024-07-12 00:31:54.441151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:26.748 [2024-07-12 00:31:54.441179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:26.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (952562) - No such process 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 952562 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.748 
00:31:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:26.748 delay0 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.748 00:31:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:26.748 EAL: No free 2048 kB hugepages reported on node 1 00:19:27.006 [2024-07-12 00:31:54.612709] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:33.565 Initializing NVMe Controllers 00:19:33.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:33.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:33.565 Initialization complete. Launching workers. 
00:19:33.565 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 105
00:19:33.565 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 392, failed to submit 33
00:19:33.565 success 219, unsuccess 173, failed 0
00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.565 rmmod nvme_tcp 00:19:33.565 rmmod nvme_fabrics 00:19:33.565 rmmod nvme_keyring 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 951549 ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 951549 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 951549 ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 951549 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 951549 00:19:33.565 00:32:00
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 951549' 00:19:33.565 killing process with pid 951549 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 951549 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 951549 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.565 00:32:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.514 00:32:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.514 00:19:35.514 real 0m27.108s 00:19:35.514 user 0m42.200s 00:19:35.514 sys 0m6.401s 00:19:35.514 00:32:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:35.514 00:32:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 ************************************ 00:19:35.514 END TEST nvmf_zcopy 00:19:35.514 ************************************ 00:19:35.514 00:32:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:35.514 
00:32:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:35.514 00:32:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:35.514 00:32:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.514 ************************************ 00:19:35.514 START TEST nvmf_nmic 00:19:35.514 ************************************ 00:19:35.514 00:32:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:35.514 * Looking for test storage... 00:19:35.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.515 
00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable
00:19:35.515 00:32:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=()
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:19:37.423 Found 0000:08:00.0 (0x8086 - 0x159b)
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:19:37.423 Found 0000:08:00.1 (0x8086 - 0x159b)
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:19:37.423 Found net devices under 0000:08:00.0: cvl_0_0
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:19:37.423 Found net devices under 0000:08:00.1: cvl_0_1
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:37.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:37.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms
00:19:37.423
00:19:37.423 --- 10.0.0.2 ping statistics ---
00:19:37.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:37.423 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:37.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:37.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms
00:19:37.423
00:19:37.423 --- 10.0.0.1 ping statistics ---
00:19:37.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:37.423 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=955150
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:37.423 00:32:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 955150
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 955150 ']'
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:37.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable
00:19:37.424 00:32:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.424 [2024-07-12 00:32:04.994846] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:19:37.424 [2024-07-12 00:32:04.994951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:37.424 EAL: No free 2048 kB hugepages reported on node 1
00:19:37.424 [2024-07-12 00:32:05.062913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:37.424 [2024-07-12 00:32:05.155480] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:37.424 [2024-07-12 00:32:05.155539] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:37.424 [2024-07-12 00:32:05.155555] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:37.424 [2024-07-12 00:32:05.155569] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:37.424 [2024-07-12 00:32:05.155581] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:37.424 [2024-07-12 00:32:05.155647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:37.424 [2024-07-12 00:32:05.155703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:37.424 [2024-07-12 00:32:05.155732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:19:37.424 [2024-07-12 00:32:05.155735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 [2024-07-12 00:32:05.305254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 Malloc0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 [2024-07-12 00:32:05.355472] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:19:37.683 test case1: single bdev can't be used in multiple subsystems
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 [2024-07-12 00:32:05.379330] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:19:37.683 [2024-07-12 00:32:05.379361] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:19:37.683 [2024-07-12 00:32:05.379377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:37.683 request:
00:19:37.683 {
00:19:37.683 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:19:37.683 "namespace": {
00:19:37.683 "bdev_name": "Malloc0",
00:19:37.683 "no_auto_visible": false
00:19:37.683 },
00:19:37.683 "method": "nvmf_subsystem_add_ns",
00:19:37.683 "req_id": 1
00:19:37.683 }
00:19:37.683 Got JSON-RPC error response
00:19:37.683 response:
00:19:37.683 {
00:19:37.683 "code": -32602,
00:19:37.683 "message": "Invalid parameters"
00:19:37.683 }
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:19:37.683 Adding namespace failed - expected result.
00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:19:37.683 test case2: host connect to nvmf target in multiple paths
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:37.683 [2024-07-12 00:32:05.387453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:37.683 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:19:38.256 00:32:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:19:38.517 00:32:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:19:38.517 00:32:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0
00:19:38.517 00:32:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:19:38.517 00:32:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:19:38.517 00:32:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0
00:19:41.055 00:32:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:19:41.055 [global]
00:19:41.055 thread=1
00:19:41.055 invalidate=1
00:19:41.055 rw=write
00:19:41.055 time_based=1
00:19:41.055 runtime=1
00:19:41.055 ioengine=libaio
00:19:41.055 direct=1
00:19:41.055 bs=4096
00:19:41.055 iodepth=1
00:19:41.055 norandommap=0
00:19:41.055 numjobs=1
00:19:41.055
00:19:41.055 verify_dump=1
00:19:41.055 verify_backlog=512
00:19:41.055 verify_state_save=0
00:19:41.055 do_verify=1
00:19:41.055 verify=crc32c-intel
00:19:41.055 [job0]
00:19:41.055 filename=/dev/nvme0n1
00:19:41.055 Could not set queue depth (nvme0n1)
00:19:41.055 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:41.055 fio-3.35
00:19:41.055 Starting 1 thread
00:19:41.996
00:19:41.996 job0: (groupid=0, jobs=1): err= 0: pid=955545: Fri Jul 12 00:32:09 2024
00:19:41.996 read: IOPS=23, BW=94.4KiB/s (96.7kB/s)(96.0KiB/1017msec)
00:19:41.996 slat (nsec): min=7844, max=31527, avg=20546.00, stdev=7279.89
00:19:41.996 clat (usec): min=396, max=41040, avg=37549.32, stdev=11431.43
00:19:41.996 lat (usec): min=415, max=41058, avg=37569.87, stdev=11430.23
00:19:41.996 clat percentiles (usec):
00:19:41.996 | 1.00th=[ 396], 5.00th=[ 478], 10.00th=[40633], 20.00th=[40633],
00:19:41.996 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:19:41.996 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:19:41.996 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:19:41.996 | 99.99th=[41157]
00:19:41.996 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets
00:19:41.996 slat (usec): min=7, max=29298, avg=65.74, stdev=1294.45
00:19:41.996 clat (usec): min=131, max=318, avg=156.95, stdev=35.66
00:19:41.996 lat (usec): min=139, max=29575, avg=222.69, stdev=1300.26
00:19:41.996 clat percentiles (usec):
00:19:41.996 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139],
00:19:41.996 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147],
00:19:41.996 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 229], 95.00th=[ 245],
00:19:41.996 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 318], 99.95th=[ 318],
00:19:41.996 | 99.99th=[ 318]
00:19:41.996 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:19:41.996 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:19:41.996 lat (usec) : 250=91.79%, 500=4.10%
00:19:41.996 lat (msec) : 50=4.10%
00:19:41.996 cpu : usr=0.10%, sys=0.59%, ctx=538, majf=0, minf=1
00:19:41.996 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:41.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:41.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:41.996 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:41.996 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:41.996
00:19:41.996 Run status group 0 (all jobs):
00:19:41.996 READ: bw=94.4KiB/s (96.7kB/s), 94.4KiB/s-94.4KiB/s (96.7kB/s-96.7kB/s), io=96.0KiB (98.3kB), run=1017-1017msec
00:19:41.996 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec
00:19:41.996
00:19:41.996 Disk stats (read/write):
00:19:41.996 nvme0n1: ios=47/512, merge=0/0, ticks=1764/77, in_queue=1841, util=98.60%
00:19:41.996 00:32:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:42.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:42.256 rmmod nvme_tcp
00:19:42.256 rmmod nvme_fabrics
00:19:42.256 rmmod nvme_keyring
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 955150 ']'
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 955150
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 955150 ']'
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 955150
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 955150
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 955150'
00:19:42.256 killing process with pid 955150
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 955150
00:19:42.256 00:32:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 955150
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:42.515 00:32:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:42.516 00:32:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:42.516 00:32:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:44.427 00:32:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:44.427
00:19:44.427 real 0m9.138s
00:19:44.427 user 0m20.701s
00:19:44.427 sys 0m2.052s
00:19:44.427 00:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable
00:19:44.427 00:32:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:19:44.427 ************************************
00:19:44.427 END TEST nvmf_nmic
00:19:44.427 ************************************
00:19:44.427 00:32:12 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:19:44.427 00:32:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:19:44.427 00:32:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:44.427 00:32:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:44.686 ************************************
00:19:44.686 START TEST nvmf_fio_target
00:19:44.686 ************************************
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:19:44.686 * Looking for test storage...
00:19:44.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:44.686 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.687 00:32:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.594 
00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.594 
00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:46.594 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:46.594 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:46.594 Found net devices under 0000:08:00.0: cvl_0_0 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:46.594 Found net devices under 0000:08:00.1: cvl_0_1 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.594 00:32:13 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.594 00:32:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.594 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.594 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.594 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:46.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:19:46.594 00:19:46.595 --- 10.0.0.2 ping statistics --- 00:19:46.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.595 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:19:46.595 00:19:46.595 --- 10.0.0.1 ping statistics --- 00:19:46.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.595 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=957155 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 957155 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- 
# '[' -z 957155 ']' 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 [2024-07-12 00:32:14.112382] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:46.595 [2024-07-12 00:32:14.112481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.595 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.595 [2024-07-12 00:32:14.178957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.595 [2024-07-12 00:32:14.270147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.595 [2024-07-12 00:32:14.270205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.595 [2024-07-12 00:32:14.270220] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.595 [2024-07-12 00:32:14.270234] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.595 [2024-07-12 00:32:14.270246] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.595 [2024-07-12 00:32:14.270329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.595 [2024-07-12 00:32:14.270383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.595 [2024-07-12 00:32:14.270413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.595 [2024-07-12 00:32:14.270415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.595 00:32:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:46.854 [2024-07-12 00:32:14.684036] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.112 00:32:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.371 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:47.371 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:47.630 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:47.630 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:19:47.889 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:47.889 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.148 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:48.148 00:32:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:48.716 00:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:48.974 00:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:48.974 00:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:49.233 00:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:49.233 00:32:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:49.492 00:32:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:49.492 00:32:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:49.751 00:32:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:50.008 00:32:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:50.008 00:32:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.266 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:50.266 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:50.532 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.790 [2024-07-12 00:32:18.483039] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.790 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:51.048 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:51.308 00:32:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:51.886 00:32:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:51.886 00:32:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:19:51.886 00:32:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:51.886 00:32:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:19:51.886 00:32:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:19:51.886 00:32:19 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:19:53.844 00:32:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:53.844 [global] 00:19:53.844 thread=1 00:19:53.844 invalidate=1 00:19:53.844 rw=write 00:19:53.844 time_based=1 00:19:53.844 runtime=1 00:19:53.844 ioengine=libaio 00:19:53.844 direct=1 00:19:53.844 bs=4096 00:19:53.844 iodepth=1 00:19:53.844 norandommap=0 00:19:53.844 numjobs=1 00:19:53.844 00:19:53.844 verify_dump=1 00:19:53.844 verify_backlog=512 00:19:53.844 verify_state_save=0 00:19:53.844 do_verify=1 00:19:53.844 verify=crc32c-intel 00:19:53.844 [job0] 00:19:53.844 filename=/dev/nvme0n1 00:19:53.844 [job1] 00:19:53.844 filename=/dev/nvme0n2 00:19:53.844 [job2] 00:19:53.844 filename=/dev/nvme0n3 00:19:53.844 [job3] 00:19:53.844 filename=/dev/nvme0n4 00:19:53.844 Could not set queue depth (nvme0n1) 00:19:53.844 Could not set queue depth (nvme0n2) 00:19:53.844 Could not set queue depth (nvme0n3) 00:19:53.844 Could not set queue depth (nvme0n4) 00:19:53.844 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.844 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:19:53.844 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:53.845 fio-3.35 00:19:53.845 Starting 4 threads 00:19:55.222 00:19:55.222 job0: (groupid=0, jobs=1): err= 0: pid=958003: Fri Jul 12 00:32:22 2024 00:19:55.222 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:55.222 slat (nsec): min=6437, max=35135, avg=14537.87, stdev=4308.61 00:19:55.222 clat (usec): min=203, max=1278, avg=347.66, stdev=98.22 00:19:55.222 lat (usec): min=210, max=1285, avg=362.20, stdev=98.04 00:19:55.222 clat percentiles (usec): 00:19:55.222 | 1.00th=[ 225], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 265], 00:19:55.222 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 355], 00:19:55.222 | 70.00th=[ 396], 80.00th=[ 457], 90.00th=[ 494], 95.00th=[ 506], 00:19:55.222 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 1057], 99.95th=[ 1287], 00:19:55.222 | 99.99th=[ 1287] 00:19:55.222 write: IOPS=1954, BW=7816KiB/s (8004kB/s)(7824KiB/1001msec); 0 zone resets 00:19:55.222 slat (usec): min=8, max=644, avg=16.20, stdev=15.62 00:19:55.222 clat (usec): min=142, max=3711, avg=202.75, stdev=90.33 00:19:55.222 lat (usec): min=152, max=3724, avg=218.95, stdev=92.17 00:19:55.222 clat percentiles (usec): 00:19:55.222 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:19:55.222 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:19:55.222 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 245], 95.00th=[ 265], 00:19:55.222 | 99.00th=[ 326], 99.50th=[ 355], 99.90th=[ 1057], 99.95th=[ 3720], 00:19:55.222 | 99.99th=[ 3720] 00:19:55.222 bw ( KiB/s): min= 8192, max= 8192, per=42.16%, avg=8192.00, stdev= 0.00, samples=1 00:19:55.222 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:55.222 lat (usec) : 250=54.27%, 500=42.27%, 750=3.32% 00:19:55.222 lat (msec) : 
2=0.11%, 4=0.03% 00:19:55.222 cpu : usr=5.30%, sys=6.20%, ctx=3495, majf=0, minf=1 00:19:55.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.222 issued rwts: total=1536,1956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.222 job1: (groupid=0, jobs=1): err= 0: pid=958004: Fri Jul 12 00:32:22 2024 00:19:55.222 read: IOPS=1866, BW=7465KiB/s (7644kB/s)(7472KiB/1001msec) 00:19:55.222 slat (nsec): min=6024, max=39997, avg=12034.33, stdev=4442.11 00:19:55.222 clat (usec): min=190, max=1146, avg=289.50, stdev=86.71 00:19:55.222 lat (usec): min=197, max=1153, avg=301.53, stdev=87.88 00:19:55.222 clat percentiles (usec): 00:19:55.222 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 233], 00:19:55.222 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 265], 00:19:55.222 | 70.00th=[ 285], 80.00th=[ 363], 90.00th=[ 424], 95.00th=[ 474], 00:19:55.222 | 99.00th=[ 562], 99.50th=[ 603], 99.90th=[ 898], 99.95th=[ 1139], 00:19:55.222 | 99.99th=[ 1139] 00:19:55.222 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:55.222 slat (nsec): min=7929, max=56412, avg=16512.19, stdev=5145.10 00:19:55.222 clat (usec): min=147, max=395, avg=188.73, stdev=23.30 00:19:55.222 lat (usec): min=156, max=413, avg=205.24, stdev=23.75 00:19:55.222 clat percentiles (usec): 00:19:55.222 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:19:55.223 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:19:55.223 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 219], 95.00th=[ 241], 00:19:55.223 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 322], 00:19:55.223 | 99.99th=[ 396] 00:19:55.223 bw ( KiB/s): min= 8192, max= 8192, per=42.16%, avg=8192.00, 
stdev= 0.00, samples=1 00:19:55.223 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:55.223 lat (usec) : 250=72.63%, 500=26.00%, 750=1.28%, 1000=0.08% 00:19:55.223 lat (msec) : 2=0.03% 00:19:55.223 cpu : usr=5.30%, sys=7.00%, ctx=3916, majf=0, minf=1 00:19:55.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 issued rwts: total=1868,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.223 job2: (groupid=0, jobs=1): err= 0: pid=958005: Fri Jul 12 00:32:22 2024 00:19:55.223 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(100KiB/1035msec) 00:19:55.223 slat (nsec): min=8059, max=46683, avg=28741.36, stdev=8348.11 00:19:55.223 clat (usec): min=282, max=41065, avg=36024.20, stdev=13452.01 00:19:55.223 lat (usec): min=311, max=41080, avg=36052.95, stdev=13449.18 00:19:55.223 clat percentiles (usec): 00:19:55.223 | 1.00th=[ 285], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[40633], 00:19:55.223 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.223 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:55.223 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:55.223 | 99.99th=[41157] 00:19:55.223 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:55.223 slat (nsec): min=6923, max=36070, avg=9847.82, stdev=3206.29 00:19:55.223 clat (usec): min=158, max=1099, avg=246.53, stdev=59.30 00:19:55.223 lat (usec): min=168, max=1107, avg=256.38, stdev=58.77 00:19:55.223 clat percentiles (usec): 00:19:55.223 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 194], 00:19:55.223 | 30.00th=[ 223], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:19:55.223 | 70.00th=[ 265], 80.00th=[ 273], 
90.00th=[ 297], 95.00th=[ 314], 00:19:55.223 | 99.00th=[ 371], 99.50th=[ 400], 99.90th=[ 1106], 99.95th=[ 1106], 00:19:55.223 | 99.99th=[ 1106] 00:19:55.223 bw ( KiB/s): min= 4096, max= 4096, per=21.08%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.223 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.223 lat (usec) : 250=41.15%, 500=54.56% 00:19:55.223 lat (msec) : 2=0.19%, 50=4.10% 00:19:55.223 cpu : usr=0.58%, sys=0.29%, ctx=540, majf=0, minf=1 00:19:55.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.223 job3: (groupid=0, jobs=1): err= 0: pid=958006: Fri Jul 12 00:32:22 2024 00:19:55.223 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:19:55.223 slat (nsec): min=7198, max=32709, avg=25871.83, stdev=8171.86 00:19:55.223 clat (usec): min=375, max=41016, avg=39186.08, stdev=8460.87 00:19:55.223 lat (usec): min=392, max=41034, avg=39211.95, stdev=8462.82 00:19:55.223 clat percentiles (usec): 00:19:55.223 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:55.223 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:55.223 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:55.223 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:55.223 | 99.99th=[41157] 00:19:55.223 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:19:55.223 slat (nsec): min=7571, max=33738, avg=8880.06, stdev=2025.83 00:19:55.223 clat (usec): min=154, max=407, avg=244.89, stdev=40.69 00:19:55.223 lat (usec): min=162, max=416, avg=253.77, stdev=40.84 00:19:55.223 clat percentiles (usec): 
00:19:55.223 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 204], 00:19:55.223 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:19:55.223 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 306], 00:19:55.223 | 99.00th=[ 359], 99.50th=[ 396], 99.90th=[ 408], 99.95th=[ 408], 00:19:55.223 | 99.99th=[ 408] 00:19:55.223 bw ( KiB/s): min= 4096, max= 4096, per=21.08%, avg=4096.00, stdev= 0.00, samples=1 00:19:55.223 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:55.223 lat (usec) : 250=44.86%, 500=51.03% 00:19:55.223 lat (msec) : 50=4.11% 00:19:55.223 cpu : usr=0.48%, sys=0.19%, ctx=536, majf=0, minf=1 00:19:55.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.223 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.223 00:19:55.223 Run status group 0 (all jobs): 00:19:55.223 READ: bw=13.0MiB/s (13.7MB/s), 89.0KiB/s-7465KiB/s (91.1kB/s-7644kB/s), io=13.5MiB (14.1MB), run=1001-1035msec 00:19:55.223 WRITE: bw=19.0MiB/s (19.9MB/s), 1979KiB/s-8184KiB/s (2026kB/s-8380kB/s), io=19.6MiB (20.6MB), run=1001-1035msec 00:19:55.223 00:19:55.223 Disk stats (read/write): 00:19:55.223 nvme0n1: ios=1420/1536, merge=0/0, ticks=691/319, in_queue=1010, util=97.90% 00:19:55.223 nvme0n2: ios=1536/1919, merge=0/0, ticks=409/350, in_queue=759, util=86.43% 00:19:55.223 nvme0n3: ios=75/512, merge=0/0, ticks=1351/126, in_queue=1477, util=97.69% 00:19:55.223 nvme0n4: ios=70/512, merge=0/0, ticks=1334/124, in_queue=1458, util=97.78% 00:19:55.223 00:32:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:55.223 [global] 
00:19:55.223 thread=1 00:19:55.223 invalidate=1 00:19:55.223 rw=randwrite 00:19:55.223 time_based=1 00:19:55.223 runtime=1 00:19:55.223 ioengine=libaio 00:19:55.223 direct=1 00:19:55.223 bs=4096 00:19:55.223 iodepth=1 00:19:55.223 norandommap=0 00:19:55.223 numjobs=1 00:19:55.223 00:19:55.223 verify_dump=1 00:19:55.223 verify_backlog=512 00:19:55.223 verify_state_save=0 00:19:55.223 do_verify=1 00:19:55.223 verify=crc32c-intel 00:19:55.223 [job0] 00:19:55.223 filename=/dev/nvme0n1 00:19:55.223 [job1] 00:19:55.223 filename=/dev/nvme0n2 00:19:55.223 [job2] 00:19:55.223 filename=/dev/nvme0n3 00:19:55.223 [job3] 00:19:55.223 filename=/dev/nvme0n4 00:19:55.223 Could not set queue depth (nvme0n1) 00:19:55.223 Could not set queue depth (nvme0n2) 00:19:55.223 Could not set queue depth (nvme0n3) 00:19:55.223 Could not set queue depth (nvme0n4) 00:19:55.481 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.481 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.481 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.481 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:55.481 fio-3.35 00:19:55.481 Starting 4 threads 00:19:56.860 00:19:56.860 job0: (groupid=0, jobs=1): err= 0: pid=958194: Fri Jul 12 00:32:24 2024 00:19:56.860 read: IOPS=1869, BW=7477KiB/s (7656kB/s)(7484KiB/1001msec) 00:19:56.860 slat (nsec): min=4957, max=35067, avg=9753.94, stdev=4485.42 00:19:56.860 clat (usec): min=188, max=42012, avg=320.02, stdev=1925.56 00:19:56.860 lat (usec): min=193, max=42028, avg=329.78, stdev=1926.02 00:19:56.860 clat percentiles (usec): 00:19:56.860 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:19:56.860 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:19:56.860 | 70.00th=[ 235], 
80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 277], 00:19:56.860 | 99.00th=[ 343], 99.50th=[ 396], 99.90th=[42206], 99.95th=[42206], 00:19:56.861 | 99.99th=[42206] 00:19:56.861 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:19:56.861 slat (nsec): min=6448, max=48558, avg=11130.41, stdev=4116.55 00:19:56.861 clat (usec): min=135, max=719, avg=170.41, stdev=25.60 00:19:56.861 lat (usec): min=142, max=728, avg=181.54, stdev=26.23 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:19:56.861 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:19:56.861 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 202], 00:19:56.861 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 408], 00:19:56.861 | 99.99th=[ 717] 00:19:56.861 bw ( KiB/s): min= 8192, max= 8192, per=45.17%, avg=8192.00, stdev= 0.00, samples=1 00:19:56.861 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:56.861 lat (usec) : 250=92.88%, 500=6.92%, 750=0.08%, 1000=0.03% 00:19:56.861 lat (msec) : 50=0.10% 00:19:56.861 cpu : usr=2.30%, sys=4.40%, ctx=3919, majf=0, minf=1 00:19:56.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 issued rwts: total=1871,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.861 job1: (groupid=0, jobs=1): err= 0: pid=958195: Fri Jul 12 00:32:24 2024 00:19:56.861 read: IOPS=74, BW=299KiB/s (307kB/s)(300KiB/1002msec) 00:19:56.861 slat (nsec): min=7093, max=38286, avg=19500.15, stdev=7869.37 00:19:56.861 clat (usec): min=206, max=41222, avg=11501.76, stdev=18175.63 00:19:56.861 lat (usec): min=223, max=41230, avg=11521.26, stdev=18178.90 00:19:56.861 clat percentiles 
(usec): 00:19:56.861 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 260], 00:19:56.861 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 322], 00:19:56.861 | 70.00th=[ 359], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:56.861 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:56.861 | 99.99th=[41157] 00:19:56.861 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:56.861 slat (nsec): min=7116, max=21742, avg=8427.43, stdev=1842.65 00:19:56.861 clat (usec): min=154, max=423, avg=256.62, stdev=33.70 00:19:56.861 lat (usec): min=162, max=431, avg=265.05, stdev=34.10 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 169], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 239], 00:19:56.861 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:19:56.861 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[ 322], 00:19:56.861 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 424], 99.95th=[ 424], 00:19:56.861 | 99.99th=[ 424] 00:19:56.861 bw ( KiB/s): min= 4096, max= 4096, per=22.58%, avg=4096.00, stdev= 0.00, samples=1 00:19:56.861 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:56.861 lat (usec) : 250=42.25%, 500=54.17% 00:19:56.861 lat (msec) : 50=3.58% 00:19:56.861 cpu : usr=0.40%, sys=0.40%, ctx=588, majf=0, minf=1 00:19:56.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 issued rwts: total=75,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.861 job2: (groupid=0, jobs=1): err= 0: pid=958197: Fri Jul 12 00:32:24 2024 00:19:56.861 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:56.861 slat (nsec): min=5314, max=33769, avg=8822.36, stdev=5390.47 
00:19:56.861 clat (usec): min=190, max=42019, avg=1637.12, stdev=7418.52 00:19:56.861 lat (usec): min=196, max=42034, avg=1645.94, stdev=7421.35 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 208], 00:19:56.861 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 219], 00:19:56.861 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 243], 95.00th=[ 408], 00:19:56.861 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:56.861 | 99.99th=[42206] 00:19:56.861 write: IOPS=596, BW=2386KiB/s (2443kB/s)(2388KiB/1001msec); 0 zone resets 00:19:56.861 slat (nsec): min=6640, max=37862, avg=8770.34, stdev=3124.90 00:19:56.861 clat (usec): min=143, max=427, avg=250.00, stdev=50.46 00:19:56.861 lat (usec): min=150, max=434, avg=258.77, stdev=49.60 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 229], 00:19:56.861 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:19:56.861 | 70.00th=[ 260], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 338], 00:19:56.861 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 429], 99.95th=[ 429], 00:19:56.861 | 99.99th=[ 429] 00:19:56.861 bw ( KiB/s): min= 4096, max= 4096, per=22.58%, avg=4096.00, stdev= 0.00, samples=1 00:19:56.861 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:56.861 lat (usec) : 250=70.06%, 500=28.04%, 750=0.18% 00:19:56.861 lat (msec) : 10=0.09%, 20=0.09%, 50=1.53% 00:19:56.861 cpu : usr=0.30%, sys=1.20%, ctx=1110, majf=0, minf=1 00:19:56.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 issued rwts: total=512,597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.861 job3: (groupid=0, 
jobs=1): err= 0: pid=958198: Fri Jul 12 00:32:24 2024 00:19:56.861 read: IOPS=1317, BW=5271KiB/s (5398kB/s)(5456KiB/1035msec) 00:19:56.861 slat (nsec): min=5337, max=34039, avg=10325.35, stdev=3647.43 00:19:56.861 clat (usec): min=199, max=41023, avg=507.90, stdev=3107.41 00:19:56.861 lat (usec): min=210, max=41040, avg=518.23, stdev=3108.21 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:19:56.861 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 273], 00:19:56.861 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 330], 00:19:56.861 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:56.861 | 99.99th=[41157] 00:19:56.861 write: IOPS=1484, BW=5936KiB/s (6079kB/s)(6144KiB/1035msec); 0 zone resets 00:19:56.861 slat (nsec): min=7007, max=58352, avg=13832.29, stdev=6809.19 00:19:56.861 clat (usec): min=136, max=1823, avg=193.37, stdev=54.91 00:19:56.861 lat (usec): min=146, max=1831, avg=207.20, stdev=56.56 00:19:56.861 clat percentiles (usec): 00:19:56.861 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:19:56.861 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 196], 00:19:56.861 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 233], 95.00th=[ 247], 00:19:56.861 | 99.00th=[ 302], 99.50th=[ 343], 99.90th=[ 717], 99.95th=[ 1827], 00:19:56.861 | 99.99th=[ 1827] 00:19:56.861 bw ( KiB/s): min= 3864, max= 8424, per=33.88%, avg=6144.00, stdev=3224.41, samples=2 00:19:56.861 iops : min= 966, max= 2106, avg=1536.00, stdev=806.10, samples=2 00:19:56.861 lat (usec) : 250=68.69%, 500=30.76%, 750=0.14%, 1000=0.07% 00:19:56.861 lat (msec) : 2=0.07%, 50=0.28% 00:19:56.861 cpu : usr=2.22%, sys=3.48%, ctx=2903, majf=0, minf=1 00:19:56.861 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:56.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.861 issued rwts: total=1364,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.861 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:56.861 00:19:56.861 Run status group 0 (all jobs): 00:19:56.861 READ: bw=14.4MiB/s (15.1MB/s), 299KiB/s-7477KiB/s (307kB/s-7656kB/s), io=14.9MiB (15.7MB), run=1001-1035msec 00:19:56.861 WRITE: bw=17.7MiB/s (18.6MB/s), 2044KiB/s-8184KiB/s (2093kB/s-8380kB/s), io=18.3MiB (19.2MB), run=1001-1035msec 00:19:56.861 00:19:56.861 Disk stats (read/write): 00:19:56.861 nvme0n1: ios=1586/1657, merge=0/0, ticks=547/281, in_queue=828, util=86.87% 00:19:56.861 nvme0n2: ios=117/512, merge=0/0, ticks=881/130, in_queue=1011, util=98.48% 00:19:56.861 nvme0n3: ios=83/512, merge=0/0, ticks=1005/138, in_queue=1143, util=98.12% 00:19:56.861 nvme0n4: ios=1399/1536, merge=0/0, ticks=631/290, in_queue=921, util=99.37% 00:19:56.861 00:32:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:56.861 [global] 00:19:56.861 thread=1 00:19:56.861 invalidate=1 00:19:56.861 rw=write 00:19:56.861 time_based=1 00:19:56.861 runtime=1 00:19:56.861 ioengine=libaio 00:19:56.861 direct=1 00:19:56.861 bs=4096 00:19:56.861 iodepth=128 00:19:56.861 norandommap=0 00:19:56.861 numjobs=1 00:19:56.861 00:19:56.861 verify_dump=1 00:19:56.861 verify_backlog=512 00:19:56.861 verify_state_save=0 00:19:56.861 do_verify=1 00:19:56.861 verify=crc32c-intel 00:19:56.861 [job0] 00:19:56.861 filename=/dev/nvme0n1 00:19:56.861 [job1] 00:19:56.861 filename=/dev/nvme0n2 00:19:56.861 [job2] 00:19:56.861 filename=/dev/nvme0n3 00:19:56.861 [job3] 00:19:56.861 filename=/dev/nvme0n4 00:19:56.861 Could not set queue depth (nvme0n1) 00:19:56.861 Could not set queue depth (nvme0n2) 00:19:56.861 Could not set queue depth (nvme0n3) 00:19:56.861 Could not set queue depth (nvme0n4) 00:19:56.861 job0: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.861 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.861 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.861 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.861 fio-3.35 00:19:56.861 Starting 4 threads 00:19:58.244 00:19:58.244 job0: (groupid=0, jobs=1): err= 0: pid=958374: Fri Jul 12 00:32:25 2024 00:19:58.244 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:19:58.244 slat (usec): min=2, max=16826, avg=126.51, stdev=772.21 00:19:58.244 clat (usec): min=7983, max=44677, avg=16609.31, stdev=6139.69 00:19:58.244 lat (usec): min=7994, max=44885, avg=16735.82, stdev=6193.72 00:19:58.244 clat percentiles (usec): 00:19:58.244 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[11600], 20.00th=[12125], 00:19:58.244 | 30.00th=[12518], 40.00th=[13960], 50.00th=[14353], 60.00th=[16188], 00:19:58.244 | 70.00th=[17171], 80.00th=[20579], 90.00th=[25822], 95.00th=[28705], 00:19:58.244 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43254], 99.95th=[44827], 00:19:58.244 | 99.99th=[44827] 00:19:58.244 write: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1006msec); 0 zone resets 00:19:58.244 slat (usec): min=4, max=11922, avg=126.73, stdev=749.35 00:19:58.244 clat (usec): min=5102, max=34957, avg=16780.41, stdev=5454.32 00:19:58.244 lat (usec): min=5938, max=34966, avg=16907.14, stdev=5511.21 00:19:58.244 clat percentiles (usec): 00:19:58.244 | 1.00th=[ 6259], 5.00th=[10159], 10.00th=[11469], 20.00th=[12518], 00:19:58.244 | 30.00th=[12780], 40.00th=[13829], 50.00th=[15139], 60.00th=[16909], 00:19:58.244 | 70.00th=[19530], 80.00th=[21627], 90.00th=[24249], 95.00th=[26346], 00:19:58.244 | 99.00th=[31327], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:19:58.244 | 99.99th=[34866] 00:19:58.244 bw ( KiB/s): 
min=13816, max=17376, per=26.09%, avg=15596.00, stdev=2517.30, samples=2 00:19:58.244 iops : min= 3454, max= 4344, avg=3899.00, stdev=629.33, samples=2 00:19:58.244 lat (msec) : 10=4.47%, 20=71.14%, 50=24.39% 00:19:58.244 cpu : usr=5.77%, sys=5.87%, ctx=376, majf=0, minf=1 00:19:58.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:58.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.244 issued rwts: total=3584,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.244 job1: (groupid=0, jobs=1): err= 0: pid=958375: Fri Jul 12 00:32:25 2024 00:19:58.244 read: IOPS=2532, BW=9.89MiB/s (10.4MB/s)(10.0MiB/1011msec) 00:19:58.244 slat (usec): min=2, max=49738, avg=204.99, stdev=1671.32 00:19:58.244 clat (usec): min=4861, max=97330, avg=25835.25, stdev=18031.84 00:19:58.244 lat (usec): min=4868, max=97335, avg=26040.24, stdev=18156.18 00:19:58.244 clat percentiles (usec): 00:19:58.244 | 1.00th=[ 4948], 5.00th=[11207], 10.00th=[12649], 20.00th=[13566], 00:19:58.244 | 30.00th=[14222], 40.00th=[14484], 50.00th=[15533], 60.00th=[21627], 00:19:58.244 | 70.00th=[23725], 80.00th=[46924], 90.00th=[56361], 95.00th=[57410], 00:19:58.244 | 99.00th=[96994], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:19:58.244 | 99.99th=[96994] 00:19:58.244 write: IOPS=2991, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1011msec); 0 zone resets 00:19:58.244 slat (usec): min=4, max=13788, avg=146.04, stdev=954.19 00:19:58.244 clat (usec): min=2881, max=82318, avg=20310.58, stdev=13009.77 00:19:58.244 lat (usec): min=2911, max=82325, avg=20456.62, stdev=13038.79 00:19:58.244 clat percentiles (usec): 00:19:58.244 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[11207], 00:19:58.244 | 30.00th=[12256], 40.00th=[13435], 50.00th=[17433], 60.00th=[18744], 00:19:58.244 | 70.00th=[23462], 
80.00th=[27395], 90.00th=[33817], 95.00th=[34341], 00:19:58.244 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:19:58.244 | 99.99th=[82314] 00:19:58.244 bw ( KiB/s): min= 8192, max=14984, per=19.39%, avg=11588.00, stdev=4802.67, samples=2 00:19:58.245 iops : min= 2048, max= 3746, avg=2897.00, stdev=1200.67, samples=2 00:19:58.245 lat (msec) : 4=0.18%, 10=4.89%, 20=54.75%, 50=30.14%, 100=10.05% 00:19:58.245 cpu : usr=2.77%, sys=6.44%, ctx=174, majf=0, minf=1 00:19:58.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:58.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.245 issued rwts: total=2560,3024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.245 job2: (groupid=0, jobs=1): err= 0: pid=958376: Fri Jul 12 00:32:25 2024 00:19:58.245 read: IOPS=4276, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1002msec) 00:19:58.245 slat (usec): min=4, max=17212, avg=118.73, stdev=742.90 00:19:58.245 clat (usec): min=934, max=41995, avg=14017.20, stdev=4173.12 00:19:58.245 lat (usec): min=6355, max=42012, avg=14135.93, stdev=4228.56 00:19:58.245 clat percentiles (usec): 00:19:58.245 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11731], 00:19:58.245 | 30.00th=[12125], 40.00th=[13173], 50.00th=[13566], 60.00th=[13829], 00:19:58.245 | 70.00th=[14353], 80.00th=[14746], 90.00th=[16909], 95.00th=[22414], 00:19:58.245 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:19:58.245 | 99.99th=[42206] 00:19:58.245 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:19:58.245 slat (usec): min=4, max=9846, avg=96.12, stdev=445.68 00:19:58.245 clat (usec): min=7319, max=35355, avg=14356.34, stdev=3839.59 00:19:58.245 lat (usec): min=7340, max=35366, avg=14452.46, stdev=3861.38 00:19:58.245 clat 
percentiles (usec): 00:19:58.245 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11600], 20.00th=[12387], 00:19:58.245 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:19:58.245 | 70.00th=[13960], 80.00th=[15008], 90.00th=[19006], 95.00th=[23200], 00:19:58.245 | 99.00th=[30016], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:19:58.245 | 99.99th=[35390] 00:19:58.245 bw ( KiB/s): min=16384, max=20480, per=30.84%, avg=18432.00, stdev=2896.31, samples=2 00:19:58.245 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:19:58.245 lat (usec) : 1000=0.01% 00:19:58.245 lat (msec) : 10=5.80%, 20=86.45%, 50=7.74% 00:19:58.245 cpu : usr=6.89%, sys=9.69%, ctx=571, majf=0, minf=1 00:19:58.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:58.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.245 issued rwts: total=4285,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.245 job3: (groupid=0, jobs=1): err= 0: pid=958377: Fri Jul 12 00:32:25 2024 00:19:58.245 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:19:58.245 slat (usec): min=2, max=34762, avg=161.62, stdev=1242.56 00:19:58.245 clat (usec): min=9128, max=99835, avg=19904.07, stdev=14006.54 00:19:58.245 lat (usec): min=10258, max=99895, avg=20065.69, stdev=14118.72 00:19:58.245 clat percentiles (msec): 00:19:58.245 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:19:58.245 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 15], 00:19:58.245 | 70.00th=[ 16], 80.00th=[ 24], 90.00th=[ 36], 95.00th=[ 54], 00:19:58.245 | 99.00th=[ 79], 99.50th=[ 82], 99.90th=[ 88], 99.95th=[ 94], 00:19:58.245 | 99.99th=[ 101] 00:19:58.245 write: IOPS=3443, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1002msec); 0 zone resets 00:19:58.245 slat (usec): min=4, 
max=23125, avg=138.05, stdev=947.42 00:19:58.245 clat (usec): min=651, max=79443, avg=19092.01, stdev=11247.18 00:19:58.245 lat (usec): min=1488, max=79462, avg=19230.05, stdev=11345.26 00:19:58.245 clat percentiles (usec): 00:19:58.245 | 1.00th=[ 6849], 5.00th=[10421], 10.00th=[11076], 20.00th=[12518], 00:19:58.245 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13698], 60.00th=[14222], 00:19:58.245 | 70.00th=[21103], 80.00th=[27919], 90.00th=[32375], 95.00th=[45876], 00:19:58.245 | 99.00th=[62653], 99.50th=[62653], 99.90th=[65274], 99.95th=[74974], 00:19:58.245 | 99.99th=[79168] 00:19:58.245 bw ( KiB/s): min=10200, max=16384, per=22.24%, avg=13292.00, stdev=4372.75, samples=2 00:19:58.245 iops : min= 2550, max= 4096, avg=3323.00, stdev=1093.19, samples=2 00:19:58.245 lat (usec) : 750=0.02% 00:19:58.245 lat (msec) : 2=0.11%, 10=1.43%, 20=70.33%, 50=23.55%, 100=4.57% 00:19:58.245 cpu : usr=3.50%, sys=5.49%, ctx=261, majf=0, minf=1 00:19:58.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:58.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:58.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:58.245 issued rwts: total=3072,3450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:58.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:58.245 00:19:58.245 Run status group 0 (all jobs): 00:19:58.245 READ: bw=52.2MiB/s (54.7MB/s), 9.89MiB/s-16.7MiB/s (10.4MB/s-17.5MB/s), io=52.7MiB (55.3MB), run=1002-1011msec 00:19:58.245 WRITE: bw=58.4MiB/s (61.2MB/s), 11.7MiB/s-18.0MiB/s (12.3MB/s-18.8MB/s), io=59.0MiB (61.9MB), run=1002-1011msec 00:19:58.245 00:19:58.245 Disk stats (read/write): 00:19:58.245 nvme0n1: ios=3077/3439, merge=0/0, ticks=25116/27106, in_queue=52222, util=85.47% 00:19:58.245 nvme0n2: ios=2097/2166, merge=0/0, ticks=24697/16916, in_queue=41613, util=88.11% 00:19:58.245 nvme0n3: ios=3641/4055, merge=0/0, ticks=24633/25649, in_queue=50282, util=90.83% 
00:19:58.245 nvme0n4: ios=2687/3072, merge=0/0, ticks=15778/20763, in_queue=36541, util=100.00% 00:19:58.245 00:32:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:58.245 [global] 00:19:58.245 thread=1 00:19:58.245 invalidate=1 00:19:58.245 rw=randwrite 00:19:58.245 time_based=1 00:19:58.245 runtime=1 00:19:58.245 ioengine=libaio 00:19:58.245 direct=1 00:19:58.245 bs=4096 00:19:58.245 iodepth=128 00:19:58.245 norandommap=0 00:19:58.245 numjobs=1 00:19:58.245 00:19:58.245 verify_dump=1 00:19:58.245 verify_backlog=512 00:19:58.245 verify_state_save=0 00:19:58.245 do_verify=1 00:19:58.245 verify=crc32c-intel 00:19:58.245 [job0] 00:19:58.245 filename=/dev/nvme0n1 00:19:58.245 [job1] 00:19:58.245 filename=/dev/nvme0n2 00:19:58.245 [job2] 00:19:58.245 filename=/dev/nvme0n3 00:19:58.245 [job3] 00:19:58.245 filename=/dev/nvme0n4 00:19:58.245 Could not set queue depth (nvme0n1) 00:19:58.245 Could not set queue depth (nvme0n2) 00:19:58.245 Could not set queue depth (nvme0n3) 00:19:58.245 Could not set queue depth (nvme0n4) 00:19:58.245 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.245 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.245 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.245 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:58.245 fio-3.35 00:19:58.245 Starting 4 threads 00:19:59.623 00:19:59.623 job0: (groupid=0, jobs=1): err= 0: pid=958643: Fri Jul 12 00:32:27 2024 00:19:59.623 read: IOPS=3918, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1003msec) 00:19:59.623 slat (usec): min=2, max=17259, avg=125.78, stdev=732.18 00:19:59.623 clat (usec): min=875, max=52170, avg=15955.05, 
stdev=7959.65 00:19:59.623 lat (usec): min=4597, max=52188, avg=16080.83, stdev=7999.35 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 7570], 5.00th=[10683], 10.00th=[11338], 20.00th=[12256], 00:19:59.623 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[13698], 00:19:59.623 | 70.00th=[13960], 80.00th=[15401], 90.00th=[25297], 95.00th=[34341], 00:19:59.623 | 99.00th=[51119], 99.50th=[51643], 99.90th=[51643], 99.95th=[52167], 00:19:59.623 | 99.99th=[52167] 00:19:59.623 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:59.623 slat (usec): min=3, max=8729, avg=117.53, stdev=669.26 00:19:59.623 clat (usec): min=6111, max=39567, avg=15674.73, stdev=7134.64 00:19:59.623 lat (usec): min=6985, max=39573, avg=15792.26, stdev=7162.27 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11338], 00:19:59.623 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12649], 60.00th=[13435], 00:19:59.623 | 70.00th=[14746], 80.00th=[17957], 90.00th=[28443], 95.00th=[35390], 00:19:59.623 | 99.00th=[36439], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:19:59.623 | 99.99th=[39584] 00:19:59.623 bw ( KiB/s): min=12288, max=20480, per=25.47%, avg=16384.00, stdev=5792.62, samples=2 00:19:59.623 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:19:59.623 lat (usec) : 1000=0.01% 00:19:59.623 lat (msec) : 10=4.51%, 20=77.37%, 50=17.33%, 100=0.77% 00:19:59.623 cpu : usr=2.89%, sys=5.09%, ctx=396, majf=0, minf=15 00:19:59.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:59.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.623 issued rwts: total=3930,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.623 job1: (groupid=0, jobs=1): err= 0: 
pid=958644: Fri Jul 12 00:32:27 2024 00:19:59.623 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:19:59.623 slat (usec): min=3, max=10457, avg=106.19, stdev=736.25 00:19:59.623 clat (usec): min=4047, max=42437, avg=13923.77, stdev=5808.02 00:19:59.623 lat (usec): min=4054, max=42445, avg=14029.96, stdev=5847.40 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 6259], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[10945], 00:19:59.623 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:19:59.623 | 70.00th=[13566], 80.00th=[15270], 90.00th=[19792], 95.00th=[27919], 00:19:59.623 | 99.00th=[39060], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:59.623 | 99.99th=[42206] 00:19:59.623 write: IOPS=5137, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:19:59.623 slat (usec): min=4, max=9636, avg=79.35, stdev=504.55 00:19:59.623 clat (usec): min=341, max=21509, avg=10796.25, stdev=2152.24 00:19:59.623 lat (usec): min=2513, max=21520, avg=10875.59, stdev=2195.51 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 4015], 5.00th=[ 6194], 10.00th=[ 7635], 20.00th=[ 9765], 00:19:59.623 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:19:59.623 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12387], 95.00th=[13173], 00:19:59.623 | 99.00th=[16581], 99.50th=[17171], 99.90th=[20841], 99.95th=[21365], 00:19:59.623 | 99.99th=[21627] 00:19:59.623 bw ( KiB/s): min=18000, max=22960, per=31.84%, avg=20480.00, stdev=3507.25, samples=2 00:19:59.623 iops : min= 4500, max= 5740, avg=5120.00, stdev=876.81, samples=2 00:19:59.623 lat (usec) : 500=0.01% 00:19:59.623 lat (msec) : 4=0.45%, 10=15.88%, 20=78.86%, 50=4.81% 00:19:59.623 cpu : usr=6.19%, sys=9.28%, ctx=462, majf=0, minf=9 00:19:59.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:59.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.623 issued rwts: total=5120,5153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.623 job2: (groupid=0, jobs=1): err= 0: pid=958646: Fri Jul 12 00:32:27 2024 00:19:59.623 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:19:59.623 slat (usec): min=4, max=10423, avg=123.44, stdev=709.73 00:19:59.623 clat (usec): min=8891, max=39919, avg=16362.79, stdev=3598.82 00:19:59.623 lat (usec): min=8916, max=39937, avg=16486.22, stdev=3660.28 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[10421], 5.00th=[13829], 10.00th=[13960], 20.00th=[14091], 00:19:59.623 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15795], 60.00th=[16188], 00:19:59.623 | 70.00th=[16712], 80.00th=[18220], 90.00th=[19792], 95.00th=[20579], 00:19:59.623 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[37487], 00:19:59.623 | 99.99th=[40109] 00:19:59.623 write: IOPS=3815, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1002msec); 0 zone resets 00:19:59.623 slat (usec): min=6, max=32160, avg=135.39, stdev=950.74 00:19:59.623 clat (usec): min=619, max=47834, avg=17772.42, stdev=7143.82 00:19:59.623 lat (usec): min=5672, max=47844, avg=17907.82, stdev=7204.61 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 6652], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:19:59.623 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15401], 60.00th=[16057], 00:19:59.623 | 70.00th=[17433], 80.00th=[18482], 90.00th=[28705], 95.00th=[34341], 00:19:59.623 | 99.00th=[47449], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:19:59.623 | 99.99th=[47973] 00:19:59.623 bw ( KiB/s): min=13264, max=16384, per=23.05%, avg=14824.00, stdev=2206.17, samples=2 00:19:59.623 iops : min= 3316, max= 4096, avg=3706.00, stdev=551.54, samples=2 00:19:59.623 lat (usec) : 750=0.01% 00:19:59.623 lat (msec) : 10=1.50%, 20=85.14%, 50=13.35% 00:19:59.623 cpu : usr=4.10%, sys=9.29%, ctx=289, majf=0, 
minf=11 00:19:59.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:59.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.623 issued rwts: total=3584,3823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.623 job3: (groupid=0, jobs=1): err= 0: pid=958650: Fri Jul 12 00:32:27 2024 00:19:59.623 read: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1004msec) 00:19:59.623 slat (usec): min=3, max=16279, avg=153.55, stdev=880.73 00:19:59.623 clat (usec): min=689, max=52653, avg=19457.73, stdev=7920.80 00:19:59.623 lat (usec): min=3109, max=52660, avg=19611.28, stdev=7936.48 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[ 3818], 5.00th=[11207], 10.00th=[12518], 20.00th=[13960], 00:19:59.623 | 30.00th=[15139], 40.00th=[16581], 50.00th=[17695], 60.00th=[18482], 00:19:59.623 | 70.00th=[19006], 80.00th=[22676], 90.00th=[31589], 95.00th=[35914], 00:19:59.623 | 99.00th=[46924], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:19:59.623 | 99.99th=[52691] 00:19:59.623 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:19:59.623 slat (usec): min=4, max=20323, avg=175.09, stdev=836.39 00:19:59.623 clat (usec): min=10359, max=82880, avg=22947.55, stdev=17513.17 00:19:59.623 lat (usec): min=10364, max=82892, avg=23122.64, stdev=17630.97 00:19:59.623 clat percentiles (usec): 00:19:59.623 | 1.00th=[10945], 5.00th=[11863], 10.00th=[13173], 20.00th=[13566], 00:19:59.623 | 30.00th=[14353], 40.00th=[15401], 50.00th=[16712], 60.00th=[17171], 00:19:59.623 | 70.00th=[18220], 80.00th=[21365], 90.00th=[61080], 95.00th=[68682], 00:19:59.624 | 99.00th=[81265], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:19:59.624 | 99.99th=[83362] 00:19:59.624 bw ( KiB/s): min=10816, max=13760, per=19.10%, avg=12288.00, stdev=2081.72, samples=2 
00:19:59.624 iops : min= 2704, max= 3440, avg=3072.00, stdev=520.43, samples=2 00:19:59.624 lat (usec) : 750=0.02% 00:19:59.624 lat (msec) : 4=0.54%, 10=0.54%, 20=74.93%, 50=17.53%, 100=6.45% 00:19:59.624 cpu : usr=2.39%, sys=4.39%, ctx=386, majf=0, minf=15 00:19:59.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:59.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:59.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:59.624 issued rwts: total=2883,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:59.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:59.624 00:19:59.624 Run status group 0 (all jobs): 00:19:59.624 READ: bw=60.4MiB/s (63.3MB/s), 11.2MiB/s-19.9MiB/s (11.8MB/s-20.9MB/s), io=60.6MiB (63.6MB), run=1002-1004msec 00:19:59.624 WRITE: bw=62.8MiB/s (65.9MB/s), 12.0MiB/s-20.1MiB/s (12.5MB/s-21.0MB/s), io=63.1MiB (66.1MB), run=1002-1004msec 00:19:59.624 00:19:59.624 Disk stats (read/write): 00:19:59.624 nvme0n1: ios=3526/3584, merge=0/0, ticks=13368/14091, in_queue=27459, util=91.28% 00:19:59.624 nvme0n2: ios=4649/4741, merge=0/0, ticks=51207/47868, in_queue=99075, util=95.53% 00:19:59.624 nvme0n3: ios=3120/3247, merge=0/0, ticks=24796/23752, in_queue=48548, util=99.69% 00:19:59.624 nvme0n4: ios=2218/2560, merge=0/0, ticks=11267/15866, in_queue=27133, util=98.53% 00:19:59.624 00:32:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:59.624 00:32:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=958750 00:19:59.624 00:32:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:59.624 00:32:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:59.624 [global] 00:19:59.624 thread=1 00:19:59.624 invalidate=1 00:19:59.624 rw=read 00:19:59.624 time_based=1 00:19:59.624 runtime=10 00:19:59.624 ioengine=libaio 
00:19:59.624 direct=1 00:19:59.624 bs=4096 00:19:59.624 iodepth=1 00:19:59.624 norandommap=1 00:19:59.624 numjobs=1 00:19:59.624 00:19:59.624 [job0] 00:19:59.624 filename=/dev/nvme0n1 00:19:59.624 [job1] 00:19:59.624 filename=/dev/nvme0n2 00:19:59.624 [job2] 00:19:59.624 filename=/dev/nvme0n3 00:19:59.624 [job3] 00:19:59.624 filename=/dev/nvme0n4 00:19:59.624 Could not set queue depth (nvme0n1) 00:19:59.624 Could not set queue depth (nvme0n2) 00:19:59.624 Could not set queue depth (nvme0n3) 00:19:59.624 Could not set queue depth (nvme0n4) 00:19:59.881 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.881 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.881 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.881 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:59.881 fio-3.35 00:19:59.881 Starting 4 threads 00:20:03.165 00:32:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:03.165 00:32:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:03.165 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36429824, buflen=4096 00:20:03.165 fio: pid=958829, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.165 00:32:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.165 00:32:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:03.165 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=17166336, buflen=4096 
00:20:03.165 fio: pid=958825, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.424 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.424 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:03.424 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9302016, buflen=4096 00:20:03.424 fio: pid=958822, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.683 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1974272, buflen=4096 00:20:03.683 fio: pid=958823, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:03.683 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.683 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:03.683 00:20:03.683 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=958822: Fri Jul 12 00:32:31 2024 00:20:03.683 read: IOPS=645, BW=2581KiB/s (2643kB/s)(9084KiB/3519msec) 00:20:03.683 slat (usec): min=4, max=17827, avg=23.95, stdev=456.01 00:20:03.683 clat (usec): min=175, max=45002, avg=1513.24, stdev=7195.87 00:20:03.683 lat (usec): min=181, max=45022, avg=1537.19, stdev=7210.06 00:20:03.683 clat percentiles (usec): 00:20:03.683 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:20:03.683 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:20:03.683 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 281], 95.00th=[ 359], 00:20:03.683 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[43254], 00:20:03.683 | 99.99th=[44827] 00:20:03.683 bw ( KiB/s): min= 360, max= 7128, per=10.27%, 
avg=1712.00, stdev=2668.31, samples=6 00:20:03.683 iops : min= 90, max= 1782, avg=428.00, stdev=667.08, samples=6 00:20:03.683 lat (usec) : 250=83.63%, 500=13.12%, 750=0.04% 00:20:03.683 lat (msec) : 4=0.04%, 50=3.12% 00:20:03.683 cpu : usr=0.20%, sys=0.80%, ctx=2276, majf=0, minf=1 00:20:03.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 issued rwts: total=2272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.683 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=958823: Fri Jul 12 00:32:31 2024 00:20:03.683 read: IOPS=127, BW=507KiB/s (519kB/s)(1928KiB/3802msec) 00:20:03.683 slat (usec): min=5, max=25809, avg=139.51, stdev=1431.95 00:20:03.683 clat (usec): min=191, max=58352, avg=7716.30, stdev=15727.26 00:20:03.683 lat (usec): min=198, max=58366, avg=7856.05, stdev=15736.42 00:20:03.683 clat percentiles (usec): 00:20:03.683 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 269], 20.00th=[ 302], 00:20:03.683 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 388], 00:20:03.683 | 70.00th=[ 433], 80.00th=[ 465], 90.00th=[40633], 95.00th=[41157], 00:20:03.683 | 99.00th=[41157], 99.50th=[44827], 99.90th=[58459], 99.95th=[58459], 00:20:03.683 | 99.99th=[58459] 00:20:03.683 bw ( KiB/s): min= 306, max= 736, per=3.13%, avg=521.43, stdev=163.15, samples=7 00:20:03.683 iops : min= 76, max= 184, avg=130.29, stdev=40.90, samples=7 00:20:03.683 lat (usec) : 250=7.45%, 500=73.71%, 750=0.21%, 1000=0.21% 00:20:03.683 lat (msec) : 4=0.21%, 50=17.81%, 100=0.21% 00:20:03.683 cpu : usr=0.16%, sys=0.39%, ctx=492, majf=0, minf=1 00:20:03.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.683 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 issued rwts: total=483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.683 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=958825: Fri Jul 12 00:32:31 2024 00:20:03.683 read: IOPS=1293, BW=5172KiB/s (5297kB/s)(16.4MiB/3241msec) 00:20:03.683 slat (nsec): min=5397, max=37746, avg=10541.38, stdev=4172.56 00:20:03.683 clat (usec): min=190, max=44035, avg=755.03, stdev=4368.86 00:20:03.683 lat (usec): min=196, max=44050, avg=765.57, stdev=4370.11 00:20:03.683 clat percentiles (usec): 00:20:03.683 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 243], 00:20:03.683 | 30.00th=[ 251], 40.00th=[ 260], 50.00th=[ 277], 60.00th=[ 293], 00:20:03.683 | 70.00th=[ 302], 80.00th=[ 322], 90.00th=[ 400], 95.00th=[ 453], 00:20:03.683 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:20:03.683 | 99.99th=[43779] 00:20:03.683 bw ( KiB/s): min= 96, max=11808, per=33.09%, avg=5514.67, stdev=5956.61, samples=6 00:20:03.683 iops : min= 24, max= 2952, avg=1378.67, stdev=1489.15, samples=6 00:20:03.683 lat (usec) : 250=29.58%, 500=68.92%, 750=0.31% 00:20:03.683 lat (msec) : 2=0.05%, 50=1.12% 00:20:03.683 cpu : usr=0.59%, sys=1.76%, ctx=4192, majf=0, minf=1 00:20:03.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.683 issued rwts: total=4192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.683 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=958829: Fri Jul 12 00:32:31 2024 
00:20:03.683 read: IOPS=3048, BW=11.9MiB/s (12.5MB/s)(34.7MiB/2918msec) 00:20:03.683 slat (nsec): min=5620, max=54140, avg=11795.93, stdev=5189.33 00:20:03.683 clat (usec): min=193, max=41064, avg=310.77, stdev=1144.72 00:20:03.683 lat (usec): min=205, max=41081, avg=322.57, stdev=1144.96 00:20:03.683 clat percentiles (usec): 00:20:03.683 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 237], 00:20:03.684 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 260], 00:20:03.684 | 70.00th=[ 277], 80.00th=[ 310], 90.00th=[ 363], 95.00th=[ 469], 00:20:03.684 | 99.00th=[ 515], 99.50th=[ 523], 99.90th=[ 2671], 99.95th=[41157], 00:20:03.684 | 99.99th=[41157] 00:20:03.684 bw ( KiB/s): min= 5528, max=15240, per=69.66%, avg=11608.00, stdev=3989.41, samples=5 00:20:03.684 iops : min= 1382, max= 3810, avg=2902.00, stdev=997.35, samples=5 00:20:03.684 lat (usec) : 250=42.35%, 500=55.47%, 750=2.07% 00:20:03.684 lat (msec) : 4=0.02%, 50=0.08% 00:20:03.684 cpu : usr=2.43%, sys=5.66%, ctx=8896, majf=0, minf=1 00:20:03.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.684 issued rwts: total=8895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:03.684 00:20:03.684 Run status group 0 (all jobs): 00:20:03.684 READ: bw=16.3MiB/s (17.1MB/s), 507KiB/s-11.9MiB/s (519kB/s-12.5MB/s), io=61.9MiB (64.9MB), run=2918-3802msec 00:20:03.684 00:20:03.684 Disk stats (read/write): 00:20:03.684 nvme0n1: ios=2259/0, merge=0/0, ticks=3252/0, in_queue=3252, util=95.34% 00:20:03.684 nvme0n2: ios=511/0, merge=0/0, ticks=4023/0, in_queue=4023, util=98.58% 00:20:03.684 nvme0n3: ios=4138/0, merge=0/0, ticks=3005/0, in_queue=3005, util=96.79% 00:20:03.684 nvme0n4: ios=8718/0, merge=0/0, ticks=2608/0, in_queue=2608, util=96.78% 
00:20:03.942 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:03.942 00:32:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:04.509 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.509 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:04.768 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:04.768 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:05.027 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:05.027 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 958750 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:05.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o 
NAME,SERIAL 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:05.291 nvmf hotplug test: fio failed as expected 00:20:05.291 00:32:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.550 rmmod nvme_tcp 00:20:05.550 rmmod nvme_fabrics 
00:20:05.550 rmmod nvme_keyring 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 957155 ']' 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 957155 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 957155 ']' 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 957155 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 957155 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 957155' 00:20:05.550 killing process with pid 957155 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 957155 00:20:05.550 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 957155 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.808 00:32:33 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.808 00:32:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.716 00:32:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.716 00:20:07.716 real 0m23.266s 00:20:07.716 user 1m22.712s 00:20:07.716 sys 0m6.479s 00:20:07.716 00:32:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:07.716 00:32:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.716 ************************************ 00:20:07.716 END TEST nvmf_fio_target 00:20:07.716 ************************************ 00:20:07.975 00:32:35 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.975 00:32:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:07.975 00:32:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:07.975 00:32:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.975 ************************************ 00:20:07.975 START TEST nvmf_bdevio 00:20:07.975 ************************************ 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:20:07.975 * Looking for test storage... 
00:20:07.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.975 00:32:35 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:09.882 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:09.882 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:09.882 Found net devices under 0000:08:00.0: cvl_0_0 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:09.882 Found net devices under 0000:08:00.1: cvl_0_1 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.882 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:09.882 00:20:09.882 --- 10.0.0.2 ping statistics --- 00:20:09.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.883 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:20:09.883 00:20:09.883 --- 10.0.0.1 ping statistics --- 00:20:09.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.883 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=960856 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 960856 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 960856 ']' 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:09.883 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:09.883 [2024-07-12 00:32:37.447881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:20:09.883 [2024-07-12 00:32:37.447990] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.883 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.883 [2024-07-12 00:32:37.513902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.883 [2024-07-12 00:32:37.605532] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.883 [2024-07-12 00:32:37.605599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.883 [2024-07-12 00:32:37.605626] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.883 [2024-07-12 00:32:37.605646] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.883 [2024-07-12 00:32:37.605664] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.883 [2024-07-12 00:32:37.605742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:09.883 [2024-07-12 00:32:37.605800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:09.883 [2024-07-12 00:32:37.605852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:09.883 [2024-07-12 00:32:37.605858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 [2024-07-12 00:32:37.755268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 Malloc0 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:10.142 [2024-07-12 00:32:37.805543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:20:10.142 { 00:20:10.142 "params": { 00:20:10.142 "name": "Nvme$subsystem", 00:20:10.142 "trtype": "$TEST_TRANSPORT", 00:20:10.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.142 "adrfam": "ipv4", 00:20:10.142 "trsvcid": "$NVMF_PORT", 00:20:10.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.142 "hdgst": ${hdgst:-false}, 00:20:10.142 "ddgst": ${ddgst:-false} 00:20:10.142 }, 00:20:10.142 "method": "bdev_nvme_attach_controller" 00:20:10.142 } 00:20:10.142 EOF 00:20:10.142 )") 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:20:10.142 00:32:37 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.142 "params": { 00:20:10.142 "name": "Nvme1", 00:20:10.142 "trtype": "tcp", 00:20:10.142 "traddr": "10.0.0.2", 00:20:10.142 "adrfam": "ipv4", 00:20:10.142 "trsvcid": "4420", 00:20:10.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.142 "hdgst": false, 00:20:10.142 "ddgst": false 00:20:10.142 }, 00:20:10.142 "method": "bdev_nvme_attach_controller" 00:20:10.142 }' 00:20:10.142 [2024-07-12 00:32:37.853340] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:20:10.142 [2024-07-12 00:32:37.853432] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960887 ] 00:20:10.142 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.142 [2024-07-12 00:32:37.914127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:10.406 [2024-07-12 00:32:38.003544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.406 [2024-07-12 00:32:38.003604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.406 [2024-07-12 00:32:38.003612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.406 I/O targets: 00:20:10.406 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:10.406 00:20:10.406 00:20:10.406 CUnit - A unit testing framework for C - Version 2.1-3 00:20:10.406 http://cunit.sourceforge.net/ 00:20:10.406 00:20:10.406 00:20:10.406 Suite: bdevio tests on: Nvme1n1 00:20:10.406 Test: blockdev write read block ...passed 00:20:10.704 Test: blockdev write zeroes read block ...passed 00:20:10.704 Test: blockdev write zeroes read no split ...passed 00:20:10.704 Test: blockdev write zeroes read split ...passed 00:20:10.704 Test: blockdev write zeroes read split partial ...passed 00:20:10.704 Test: blockdev reset ...[2024-07-12 00:32:38.322900] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.704 [2024-07-12 00:32:38.323022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163a760 (9): Bad file descriptor 00:20:10.704 [2024-07-12 00:32:38.377314] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:10.704 passed 00:20:10.704 Test: blockdev write read 8 blocks ...passed 00:20:10.704 Test: blockdev write read size > 128k ...passed 00:20:10.704 Test: blockdev write read invalid size ...passed 00:20:10.704 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:10.704 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:10.704 Test: blockdev write read max offset ...passed 00:20:10.704 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:10.968 Test: blockdev writev readv 8 blocks ...passed 00:20:10.968 Test: blockdev writev readv 30 x 1block ...passed 00:20:10.968 Test: blockdev writev readv block ...passed 00:20:10.968 Test: blockdev writev readv size > 128k ...passed 00:20:10.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:10.968 Test: blockdev comparev and writev ...[2024-07-12 00:32:38.589182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.589224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.589251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.589269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.589620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.589646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.589670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.589687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.590021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.590045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.590069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.590085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.590424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.590448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.590471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:10.968 [2024-07-12 00:32:38.590488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:10.968 passed 00:20:10.968 Test: blockdev nvme passthru rw ...passed 00:20:10.968 Test: blockdev nvme passthru vendor specific ...[2024-07-12 00:32:38.672941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.968 [2024-07-12 00:32:38.672970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.673159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.968 [2024-07-12 00:32:38.673183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.673353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.968 [2024-07-12 00:32:38.673377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:10.968 [2024-07-12 00:32:38.673553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.968 [2024-07-12 00:32:38.673576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:10.968 passed 00:20:10.968 Test: blockdev nvme admin passthru ...passed 00:20:10.968 Test: blockdev copy ...passed 00:20:10.968 00:20:10.968 Run Summary: Type Total Ran Passed Failed Inactive 00:20:10.968 suites 1 1 n/a 0 0 00:20:10.968 tests 23 23 23 0 0 00:20:10.968 asserts 152 152 152 0 n/a 00:20:10.968 00:20:10.968 Elapsed time = 1.117 seconds 00:20:11.228 00:32:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.229 rmmod nvme_tcp 00:20:11.229 rmmod nvme_fabrics 00:20:11.229 rmmod nvme_keyring 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 960856 ']' 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 960856 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 960856 ']' 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 960856 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 960856 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 960856' 00:20:11.229 killing process with pid 960856 00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 960856 
00:20:11.229 00:32:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 960856 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.487 00:32:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.396 00:32:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:13.396 00:20:13.396 real 0m5.594s 00:20:13.396 user 0m8.626s 00:20:13.396 sys 0m1.753s 00:20:13.396 00:32:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:13.396 00:32:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:20:13.396 ************************************ 00:20:13.396 END TEST nvmf_bdevio 00:20:13.396 ************************************ 00:20:13.396 00:32:41 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:13.396 00:32:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:13.396 00:32:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:13.396 00:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.655 ************************************ 00:20:13.655 START TEST nvmf_auth_target 00:20:13.655 ************************************ 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:13.655 * Looking for test storage... 00:20:13.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.655 00:32:41 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:13.656 
00:32:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.656 00:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:15.036 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:15.036 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.036 00:32:42 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:15.036 Found net devices under 0000:08:00.0: cvl_0_0 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:15.036 Found net devices under 0000:08:00.1: cvl_0_1 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.036 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:15.295 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:20:15.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:20:15.295 00:20:15.295 --- 10.0.0.2 ping statistics --- 00:20:15.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.295 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:20:15.295 00:20:15.295 --- 10.0.0.1 ping statistics --- 00:20:15.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.295 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=962477 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 962477 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 962477 ']' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.295 00:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=962507 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:15.865 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0486753659fc8443bf68bf2104facac9e7ae9b672a757701 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DDL 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0486753659fc8443bf68bf2104facac9e7ae9b672a757701 0 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0486753659fc8443bf68bf2104facac9e7ae9b672a757701 0 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0486753659fc8443bf68bf2104facac9e7ae9b672a757701 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:20:15.866 00:32:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DDL 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DDL 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DDL 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f94eef62198fe2a8b8d06ef6803f66ea06a926a20aff455979da1b3e25d95eee 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.IRI 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f94eef62198fe2a8b8d06ef6803f66ea06a926a20aff455979da1b3e25d95eee 3 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f94eef62198fe2a8b8d06ef6803f66ea06a926a20aff455979da1b3e25d95eee 3 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@704 -- # key=f94eef62198fe2a8b8d06ef6803f66ea06a926a20aff455979da1b3e25d95eee 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.IRI 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.IRI 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.IRI 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fa392d990fa91317b605fdda792b7180 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eiG 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fa392d990fa91317b605fdda792b7180 1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fa392d990fa91317b605fdda792b7180 1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.866 00:32:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fa392d990fa91317b605fdda792b7180 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eiG 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eiG 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.eiG 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ccaa2caba09ad40737f6b28bae70db26ddd3fc551025c5d0 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1uw 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ccaa2caba09ad40737f6b28bae70db26ddd3fc551025c5d0 2 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
ccaa2caba09ad40737f6b28bae70db26ddd3fc551025c5d0 2 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ccaa2caba09ad40737f6b28bae70db26ddd3fc551025c5d0 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1uw 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1uw 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.1uw 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f933f60a5135e9341616b822e9b89698cd9e68e75c158320 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3rS 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # 
format_dhchap_key f933f60a5135e9341616b822e9b89698cd9e68e75c158320 2 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f933f60a5135e9341616b822e9b89698cd9e68e75c158320 2 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:15.866 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:15.867 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f933f60a5135e9341616b822e9b89698cd9e68e75c158320 00:20:15.867 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:20:15.867 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3rS 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3rS 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.3rS 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0b58acb6bd21fb46c8d471d01c742ffc 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jtd 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0b58acb6bd21fb46c8d471d01c742ffc 1 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0b58acb6bd21fb46c8d471d01c742ffc 1 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0b58acb6bd21fb46c8d471d01c742ffc 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jtd 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jtd 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jtd 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=655e1b637ef586f19e213a7b3a1c23e5dd4b00257f0437c77352150398b85f30 00:20:16.125 00:32:43 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZyU 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 655e1b637ef586f19e213a7b3a1c23e5dd4b00257f0437c77352150398b85f30 3 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 655e1b637ef586f19e213a7b3a1c23e5dd4b00257f0437c77352150398b85f30 3 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=655e1b637ef586f19e213a7b3a1c23e5dd4b00257f0437c77352150398b85f30 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:20:16.125 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZyU 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZyU 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ZyU 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 962477 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 962477 ']' 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:16.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.126 00:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 962507 /var/tmp/host.sock 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 962507 ']' 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:16.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.384 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DDL 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DDL 00:20:16.643 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DDL 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.IRI ]] 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IRI 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.211 00:32:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IRI 00:20:17.211 00:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IRI 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.eiG 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.eiG 00:20:17.470 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.eiG 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.1uw ]] 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1uw 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1uw 00:20:17.729 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1uw 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3rS 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3rS 00:20:17.987 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3rS 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jtd ]] 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jtd 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jtd 00:20:18.245 00:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.jtd 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZyU 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZyU 00:20:18.504 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZyU 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.763 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:19.021 00:32:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.021 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.278 00:20:19.278 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.278 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.278 00:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.536 
00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.536 { 00:20:19.536 "cntlid": 1, 00:20:19.536 "qid": 0, 00:20:19.536 "state": "enabled", 00:20:19.536 "listen_address": { 00:20:19.536 "trtype": "TCP", 00:20:19.536 "adrfam": "IPv4", 00:20:19.536 "traddr": "10.0.0.2", 00:20:19.536 "trsvcid": "4420" 00:20:19.536 }, 00:20:19.536 "peer_address": { 00:20:19.536 "trtype": "TCP", 00:20:19.536 "adrfam": "IPv4", 00:20:19.536 "traddr": "10.0.0.1", 00:20:19.536 "trsvcid": "55224" 00:20:19.536 }, 00:20:19.536 "auth": { 00:20:19.536 "state": "completed", 00:20:19.536 "digest": "sha256", 00:20:19.536 "dhgroup": "null" 00:20:19.536 } 00:20:19.536 } 00:20:19.536 ]' 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.536 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:19.794 00:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:20:25.061 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.061 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:25.061 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.061 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.319 00:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.319 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.319 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.319 00:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 
00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.576 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.833 00:20:25.833 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.833 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.833 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.090 { 00:20:26.090 "cntlid": 3, 00:20:26.090 "qid": 0, 00:20:26.090 "state": "enabled", 00:20:26.090 "listen_address": { 00:20:26.090 "trtype": "TCP", 00:20:26.090 "adrfam": "IPv4", 00:20:26.090 "traddr": "10.0.0.2", 00:20:26.090 "trsvcid": "4420" 00:20:26.090 }, 00:20:26.090 "peer_address": { 00:20:26.090 "trtype": "TCP", 00:20:26.090 "adrfam": "IPv4", 00:20:26.090 "traddr": "10.0.0.1", 00:20:26.090 "trsvcid": "60620" 00:20:26.090 }, 00:20:26.090 "auth": { 00:20:26.090 "state": "completed", 00:20:26.090 "digest": "sha256", 00:20:26.090 "dhgroup": "null" 00:20:26.090 } 00:20:26.090 } 00:20:26.090 ]' 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.090 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.348 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.348 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.348 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.348 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.348 00:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.607 00:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.986 00:32:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.986 00:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.245 00:20:28.504 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.504 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.504 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.764 00:32:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.764 { 00:20:28.764 "cntlid": 5, 00:20:28.764 "qid": 0, 00:20:28.764 "state": "enabled", 00:20:28.764 "listen_address": { 00:20:28.764 "trtype": "TCP", 00:20:28.764 "adrfam": "IPv4", 00:20:28.764 "traddr": "10.0.0.2", 00:20:28.764 "trsvcid": "4420" 00:20:28.764 }, 00:20:28.764 "peer_address": { 00:20:28.764 "trtype": "TCP", 00:20:28.764 "adrfam": "IPv4", 00:20:28.764 "traddr": "10.0.0.1", 00:20:28.764 "trsvcid": "60630" 00:20:28.764 }, 00:20:28.764 "auth": { 00:20:28.764 "state": "completed", 00:20:28.764 "digest": "sha256", 00:20:28.764 "dhgroup": "null" 00:20:28.764 } 00:20:28.764 } 00:20:28.764 ]' 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.764 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.765 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.023 00:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:20:30.401 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.401 00:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:30.401 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.401 00:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.401 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.401 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.401 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:30.401 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.660 00:32:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.660 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.919 00:20:30.919 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.919 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.919 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.178 { 00:20:31.178 "cntlid": 7, 00:20:31.178 "qid": 0, 00:20:31.178 "state": "enabled", 00:20:31.178 "listen_address": { 00:20:31.178 "trtype": "TCP", 00:20:31.178 "adrfam": "IPv4", 00:20:31.178 "traddr": "10.0.0.2", 00:20:31.178 "trsvcid": "4420" 00:20:31.178 }, 00:20:31.178 "peer_address": { 00:20:31.178 "trtype": "TCP", 00:20:31.178 "adrfam": "IPv4", 00:20:31.178 "traddr": "10.0.0.1", 00:20:31.178 "trsvcid": "60648" 00:20:31.178 }, 00:20:31.178 "auth": { 00:20:31.178 "state": "completed", 00:20:31.178 "digest": "sha256", 00:20:31.178 "dhgroup": "null" 00:20:31.178 } 00:20:31.178 } 00:20:31.178 ]' 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.178 00:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.437 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:31.437 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.437 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.437 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.437 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.696 00:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:20:32.633 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:32.891 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:33.149 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:20:33.149 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.149 00:33:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.150 00:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.408 00:20:33.408 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.408 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.408 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.666 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.666 { 00:20:33.667 "cntlid": 9, 00:20:33.667 "qid": 0, 00:20:33.667 "state": "enabled", 00:20:33.667 "listen_address": { 00:20:33.667 "trtype": "TCP", 00:20:33.667 "adrfam": "IPv4", 00:20:33.667 "traddr": "10.0.0.2", 00:20:33.667 "trsvcid": "4420" 00:20:33.667 }, 00:20:33.667 "peer_address": { 00:20:33.667 "trtype": "TCP", 00:20:33.667 "adrfam": "IPv4", 00:20:33.667 "traddr": "10.0.0.1", 00:20:33.667 "trsvcid": "60668" 00:20:33.667 }, 00:20:33.667 "auth": { 00:20:33.667 "state": "completed", 00:20:33.667 "digest": "sha256", 00:20:33.667 "dhgroup": "ffdhe2048" 00:20:33.667 } 00:20:33.667 } 00:20:33.667 ]' 00:20:33.667 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.924 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.182 00:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:35.557 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key 
ckey qpairs 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.815 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.072 00:20:36.072 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.072 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.072 00:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.330 { 00:20:36.330 "cntlid": 11, 00:20:36.330 "qid": 0, 00:20:36.330 "state": "enabled", 00:20:36.330 "listen_address": { 00:20:36.330 "trtype": "TCP", 00:20:36.330 "adrfam": "IPv4", 00:20:36.330 "traddr": "10.0.0.2", 00:20:36.330 "trsvcid": "4420" 00:20:36.330 }, 00:20:36.330 "peer_address": { 00:20:36.330 "trtype": "TCP", 00:20:36.330 "adrfam": "IPv4", 00:20:36.330 "traddr": "10.0.0.1", 00:20:36.330 "trsvcid": "38746" 00:20:36.330 }, 00:20:36.330 "auth": { 00:20:36.330 "state": "completed", 00:20:36.330 "digest": "sha256", 00:20:36.330 "dhgroup": "ffdhe2048" 00:20:36.330 } 00:20:36.330 } 00:20:36.330 ]' 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.330 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.588 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:36.588 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.588 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.588 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:36.588 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.847 00:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:38.226 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 
-- # local digest dhgroup key ckey qpairs 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.227 00:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.795 00:20:38.795 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.795 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.795 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.054 { 00:20:39.054 "cntlid": 13, 00:20:39.054 "qid": 0, 00:20:39.054 "state": "enabled", 00:20:39.054 "listen_address": { 00:20:39.054 "trtype": "TCP", 00:20:39.054 "adrfam": "IPv4", 00:20:39.054 "traddr": "10.0.0.2", 00:20:39.054 "trsvcid": "4420" 00:20:39.054 }, 00:20:39.054 "peer_address": { 00:20:39.054 "trtype": "TCP", 00:20:39.054 "adrfam": "IPv4", 00:20:39.054 "traddr": "10.0.0.1", 00:20:39.054 "trsvcid": "38764" 00:20:39.054 }, 00:20:39.054 "auth": { 00:20:39.054 "state": "completed", 00:20:39.054 "digest": "sha256", 00:20:39.054 "dhgroup": "ffdhe2048" 00:20:39.054 } 00:20:39.054 } 00:20:39.054 ]' 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:39.054 00:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.314 00:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:20:40.690 00:33:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:40.690 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.948 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.206 00:20:41.207 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.207 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.207 00:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.465 { 00:20:41.465 "cntlid": 15, 00:20:41.465 "qid": 0, 00:20:41.465 "state": "enabled", 00:20:41.465 "listen_address": { 00:20:41.465 "trtype": "TCP", 00:20:41.465 "adrfam": "IPv4", 00:20:41.465 "traddr": "10.0.0.2", 00:20:41.465 "trsvcid": "4420" 00:20:41.465 }, 00:20:41.465 "peer_address": { 00:20:41.465 "trtype": "TCP", 00:20:41.465 "adrfam": "IPv4", 00:20:41.465 "traddr": "10.0.0.1", 00:20:41.465 "trsvcid": "38792" 00:20:41.465 }, 00:20:41.465 "auth": { 00:20:41.465 "state": "completed", 00:20:41.465 "digest": "sha256", 00:20:41.465 "dhgroup": "ffdhe2048" 00:20:41.465 } 00:20:41.465 } 00:20:41.465 ]' 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.465 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.724 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.724 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:41.724 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.982 00:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.362 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.363 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.363 00:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:43.363 00:33:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.363 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.929 00:20:43.929 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.929 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.929 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.188 { 00:20:44.188 "cntlid": 17, 00:20:44.188 "qid": 0, 00:20:44.188 "state": "enabled", 00:20:44.188 "listen_address": { 00:20:44.188 "trtype": "TCP", 00:20:44.188 "adrfam": "IPv4", 00:20:44.188 "traddr": "10.0.0.2", 00:20:44.188 "trsvcid": "4420" 00:20:44.188 }, 00:20:44.188 "peer_address": { 00:20:44.188 "trtype": "TCP", 00:20:44.188 "adrfam": "IPv4", 00:20:44.188 "traddr": "10.0.0.1", 00:20:44.188 "trsvcid": "38816" 00:20:44.188 }, 00:20:44.188 "auth": { 00:20:44.188 "state": "completed", 00:20:44.188 "digest": "sha256", 00:20:44.188 "dhgroup": "ffdhe3072" 00:20:44.188 } 00:20:44.188 } 00:20:44.188 ]' 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.188 00:33:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.188 00:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.454 00:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:45.866 00:33:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.866 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.867 00:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.432 00:20:46.432 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.432 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:20:46.432 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.691 { 00:20:46.691 "cntlid": 19, 00:20:46.691 "qid": 0, 00:20:46.691 "state": "enabled", 00:20:46.691 "listen_address": { 00:20:46.691 "trtype": "TCP", 00:20:46.691 "adrfam": "IPv4", 00:20:46.691 "traddr": "10.0.0.2", 00:20:46.691 "trsvcid": "4420" 00:20:46.691 }, 00:20:46.691 "peer_address": { 00:20:46.691 "trtype": "TCP", 00:20:46.691 "adrfam": "IPv4", 00:20:46.691 "traddr": "10.0.0.1", 00:20:46.691 "trsvcid": "36962" 00:20:46.691 }, 00:20:46.691 "auth": { 00:20:46.691 "state": "completed", 00:20:46.691 "digest": "sha256", 00:20:46.691 "dhgroup": "ffdhe3072" 00:20:46.691 } 00:20:46.691 } 00:20:46.691 ]' 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.691 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.258 00:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:20:48.199 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.199 00:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:48.199 00:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.199 00:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.199 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.199 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.199 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.199 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:20:48.764 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:48.764 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.764 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.765 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.022 00:20:49.022 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.022 00:33:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.022 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.280 { 00:20:49.280 "cntlid": 21, 00:20:49.280 "qid": 0, 00:20:49.280 "state": "enabled", 00:20:49.280 "listen_address": { 00:20:49.280 "trtype": "TCP", 00:20:49.280 "adrfam": "IPv4", 00:20:49.280 "traddr": "10.0.0.2", 00:20:49.280 "trsvcid": "4420" 00:20:49.280 }, 00:20:49.280 "peer_address": { 00:20:49.280 "trtype": "TCP", 00:20:49.280 "adrfam": "IPv4", 00:20:49.280 "traddr": "10.0.0.1", 00:20:49.280 "trsvcid": "36986" 00:20:49.280 }, 00:20:49.280 "auth": { 00:20:49.280 "state": "completed", 00:20:49.280 "digest": "sha256", 00:20:49.280 "dhgroup": "ffdhe3072" 00:20:49.280 } 00:20:49.280 } 00:20:49.280 ]' 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.280 00:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.280 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.280 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.280 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.280 00:33:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.280 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.280 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.849 00:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:50.788 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.047 00:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:51.613 00:20:51.613 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.613 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:20:51.613 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.872 { 00:20:51.872 "cntlid": 23, 00:20:51.872 "qid": 0, 00:20:51.872 "state": "enabled", 00:20:51.872 "listen_address": { 00:20:51.872 "trtype": "TCP", 00:20:51.872 "adrfam": "IPv4", 00:20:51.872 "traddr": "10.0.0.2", 00:20:51.872 "trsvcid": "4420" 00:20:51.872 }, 00:20:51.872 "peer_address": { 00:20:51.872 "trtype": "TCP", 00:20:51.872 "adrfam": "IPv4", 00:20:51.872 "traddr": "10.0.0.1", 00:20:51.872 "trsvcid": "37006" 00:20:51.872 }, 00:20:51.872 "auth": { 00:20:51.872 "state": "completed", 00:20:51.872 "digest": "sha256", 00:20:51.872 "dhgroup": "ffdhe3072" 00:20:51.872 } 00:20:51.872 } 00:20:51.872 ]' 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.872 00:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.441 00:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:53.379 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.637 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.204 00:20:54.204 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:20:54.204 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.204 00:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.462 { 00:20:54.462 "cntlid": 25, 00:20:54.462 "qid": 0, 00:20:54.462 "state": "enabled", 00:20:54.462 "listen_address": { 00:20:54.462 "trtype": "TCP", 00:20:54.462 "adrfam": "IPv4", 00:20:54.462 "traddr": "10.0.0.2", 00:20:54.462 "trsvcid": "4420" 00:20:54.462 }, 00:20:54.462 "peer_address": { 00:20:54.462 "trtype": "TCP", 00:20:54.462 "adrfam": "IPv4", 00:20:54.462 "traddr": "10.0.0.1", 00:20:54.462 "trsvcid": "37014" 00:20:54.462 }, 00:20:54.462 "auth": { 00:20:54.462 "state": "completed", 00:20:54.462 "digest": "sha256", 00:20:54.462 "dhgroup": "ffdhe4096" 00:20:54.462 } 00:20:54.462 } 00:20:54.462 ]' 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.462 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:20:54.722 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.722 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.722 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.980 00:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.358 00:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:56.358 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:56.358 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.358 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:56.358 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:56.358 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.359 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.927 
00:20:56.927 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.927 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.927 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:57.186 { 00:20:57.186 "cntlid": 27, 00:20:57.186 "qid": 0, 00:20:57.186 "state": "enabled", 00:20:57.186 "listen_address": { 00:20:57.186 "trtype": "TCP", 00:20:57.186 "adrfam": "IPv4", 00:20:57.186 "traddr": "10.0.0.2", 00:20:57.186 "trsvcid": "4420" 00:20:57.186 }, 00:20:57.186 "peer_address": { 00:20:57.186 "trtype": "TCP", 00:20:57.186 "adrfam": "IPv4", 00:20:57.186 "traddr": "10.0.0.1", 00:20:57.186 "trsvcid": "34900" 00:20:57.186 }, 00:20:57.186 "auth": { 00:20:57.186 "state": "completed", 00:20:57.186 "digest": "sha256", 00:20:57.186 "dhgroup": "ffdhe4096" 00:20:57.186 } 00:20:57.186 } 00:20:57.186 ]' 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.186 00:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.186 00:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.186 00:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.186 00:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.752 00:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:58.692 00:33:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.950 00:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:59.517 00:20:59.517 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:59.517 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.517 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.775 { 00:20:59.775 "cntlid": 29, 00:20:59.775 "qid": 0, 00:20:59.775 "state": "enabled", 00:20:59.775 "listen_address": { 00:20:59.775 "trtype": "TCP", 00:20:59.775 "adrfam": "IPv4", 00:20:59.775 "traddr": "10.0.0.2", 00:20:59.775 "trsvcid": "4420" 00:20:59.775 }, 00:20:59.775 "peer_address": { 00:20:59.775 "trtype": "TCP", 00:20:59.775 "adrfam": "IPv4", 00:20:59.775 "traddr": "10.0.0.1", 00:20:59.775 "trsvcid": "34916" 00:20:59.775 }, 00:20:59.775 "auth": { 00:20:59.775 "state": "completed", 00:20:59.775 "digest": "sha256", 00:20:59.775 "dhgroup": "ffdhe4096" 00:20:59.775 } 00:20:59.775 } 00:20:59.775 ]' 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.775 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.775 00:33:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.033 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.033 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.033 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.033 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.320 00:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 
00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.700 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.267 
00:21:02.267 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.267 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.267 00:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.526 { 00:21:02.526 "cntlid": 31, 00:21:02.526 "qid": 0, 00:21:02.526 "state": "enabled", 00:21:02.526 "listen_address": { 00:21:02.526 "trtype": "TCP", 00:21:02.526 "adrfam": "IPv4", 00:21:02.526 "traddr": "10.0.0.2", 00:21:02.526 "trsvcid": "4420" 00:21:02.526 }, 00:21:02.526 "peer_address": { 00:21:02.526 "trtype": "TCP", 00:21:02.526 "adrfam": "IPv4", 00:21:02.526 "traddr": "10.0.0.1", 00:21:02.526 "trsvcid": "34924" 00:21:02.526 }, 00:21:02.526 "auth": { 00:21:02.526 "state": "completed", 00:21:02.526 "digest": "sha256", 00:21:02.526 "dhgroup": "ffdhe4096" 00:21:02.526 } 00:21:02.526 } 00:21:02.526 ]' 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:02.526 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.784 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.784 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.784 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.042 00:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
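The three `jq` checks in the trace above (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`) validate the qpair's negotiated authentication parameters against the expected digest and DH group. A minimal Python sketch of the same validation, using the qpair JSON printed by `nvmf_subsystem_get_qpairs` in this log (an illustration only, not part of the test suite; peer port and cntlid vary per connection):

```python
import json

# Sample qpair listing as printed by `rpc.py nvmf_subsystem_get_qpairs`
# in the log above.
qpairs_json = '''
[
  {
    "cntlid": 31,
    "qid": 0,
    "state": "enabled",
    "listen_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.2", "trsvcid": "4420"
    },
    "peer_address": {
      "trtype": "TCP", "adrfam": "IPv4",
      "traddr": "10.0.0.1", "trsvcid": "34924"
    },
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe4096"
    }
  }
]
'''

def check_auth(qpairs, digest, dhgroup):
    """Mirror the three jq assertions: digest, dhgroup, and auth state."""
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

qpairs = json.loads(qpairs_json)
print(check_auth(qpairs, "sha256", "ffdhe4096"))  # True for this sample
```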
00:21:04.420 00:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.420 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.988 00:21:04.988 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.988 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.988 00:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.557 { 00:21:05.557 "cntlid": 33, 00:21:05.557 "qid": 0, 00:21:05.557 "state": "enabled", 00:21:05.557 "listen_address": { 00:21:05.557 "trtype": "TCP", 00:21:05.557 "adrfam": "IPv4", 00:21:05.557 "traddr": "10.0.0.2", 00:21:05.557 "trsvcid": "4420" 00:21:05.557 }, 00:21:05.557 "peer_address": { 00:21:05.557 "trtype": "TCP", 00:21:05.557 "adrfam": "IPv4", 00:21:05.557 "traddr": "10.0.0.1", 00:21:05.557 "trsvcid": "34934" 00:21:05.557 }, 00:21:05.557 "auth": { 00:21:05.557 "state": "completed", 00:21:05.557 "digest": "sha256", 00:21:05.557 "dhgroup": "ffdhe6144" 00:21:05.557 } 00:21:05.557 } 00:21:05.557 ]' 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.557 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.814 00:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- 
# hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.219 00:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.219 00:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.219 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.219 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.785 00:21:07.785 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.785 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.785 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.350 { 00:21:08.350 "cntlid": 35, 00:21:08.350 "qid": 0, 00:21:08.350 "state": "enabled", 00:21:08.350 "listen_address": { 00:21:08.350 "trtype": "TCP", 00:21:08.350 "adrfam": "IPv4", 00:21:08.350 "traddr": "10.0.0.2", 00:21:08.350 "trsvcid": "4420" 00:21:08.350 }, 00:21:08.350 "peer_address": { 00:21:08.350 "trtype": "TCP", 00:21:08.350 "adrfam": "IPv4", 00:21:08.350 "traddr": "10.0.0.1", 00:21:08.350 "trsvcid": "48056" 00:21:08.350 }, 00:21:08.350 "auth": { 00:21:08.350 "state": "completed", 00:21:08.350 "digest": "sha256", 00:21:08.350 "dhgroup": "ffdhe6144" 00:21:08.350 } 00:21:08.350 } 00:21:08.350 ]' 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.350 
00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.350 00:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.350 00:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.350 00:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.350 00:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.675 00:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.049 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.050 00:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.618 00:21:10.877 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.877 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.877 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.135 { 00:21:11.135 "cntlid": 37, 00:21:11.135 "qid": 0, 00:21:11.135 "state": "enabled", 00:21:11.135 "listen_address": { 00:21:11.135 "trtype": "TCP", 00:21:11.135 "adrfam": "IPv4", 00:21:11.135 "traddr": "10.0.0.2", 00:21:11.135 "trsvcid": "4420" 00:21:11.135 }, 00:21:11.135 "peer_address": { 00:21:11.135 "trtype": "TCP", 00:21:11.135 "adrfam": "IPv4", 00:21:11.135 "traddr": "10.0.0.1", 00:21:11.135 "trsvcid": "48090" 00:21:11.135 }, 00:21:11.135 "auth": { 00:21:11.135 "state": "completed", 00:21:11.135 "digest": "sha256", 00:21:11.135 "dhgroup": "ffdhe6144" 00:21:11.135 } 00:21:11.135 } 00:21:11.135 ]' 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.135 00:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.393 00:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:12.768 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.769 
00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:12.769 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.027 00:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.594 00:21:13.594 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.594 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.594 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.853 { 00:21:13.853 "cntlid": 39, 00:21:13.853 "qid": 0, 00:21:13.853 "state": "enabled", 00:21:13.853 "listen_address": { 00:21:13.853 "trtype": "TCP", 00:21:13.853 "adrfam": "IPv4", 00:21:13.853 "traddr": "10.0.0.2", 00:21:13.853 "trsvcid": "4420" 00:21:13.853 }, 00:21:13.853 "peer_address": { 00:21:13.853 "trtype": "TCP", 00:21:13.853 "adrfam": "IPv4", 00:21:13.853 "traddr": "10.0.0.1", 00:21:13.853 "trsvcid": "48124" 00:21:13.853 }, 00:21:13.853 "auth": { 00:21:13.853 "state": "completed", 00:21:13.853 "digest": "sha256", 00:21:13.853 "dhgroup": "ffdhe6144" 00:21:13.853 } 00:21:13.853 } 00:21:13.853 ]' 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.853 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.853 00:33:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.111 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.111 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.111 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.111 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.111 00:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.368 00:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.748 
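The `auth.sh@92`/`@93` loop markers visible above (`for dhgroup in "${dhgroups[@]}"`, `for keyid in "${!keys[@]}"`) show the test iterating every key id under each DH group, running one set_options / add_host / attach / verify / detach / connect / disconnect cycle per combination. A small sketch enumerating just the slice of the matrix visible in this chunk (the full test in target/auth.sh covers additional digests and groups; the lists below are taken only from what this log shows):

```python
from itertools import product

# The dhgroups and key ids exercised in this part of the log.
dhgroups = ["ffdhe4096", "ffdhe6144", "ffdhe8192"]
keyids = [0, 1, 2, 3]

# Each tuple corresponds to one connect_authenticate invocation,
# e.g. `connect_authenticate sha256 ffdhe6144 3` in the trace.
matrix = [("sha256", g, k) for g, k in product(dhgroups, keyids)]
print(len(matrix))  # 12 iterations in this slice
```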
00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.748 00:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.148 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.148 { 00:21:17.148 "cntlid": 41, 00:21:17.148 "qid": 0, 00:21:17.148 "state": "enabled", 00:21:17.148 "listen_address": { 00:21:17.148 "trtype": "TCP", 00:21:17.148 "adrfam": "IPv4", 00:21:17.148 "traddr": "10.0.0.2", 00:21:17.148 "trsvcid": "4420" 00:21:17.148 }, 00:21:17.148 "peer_address": { 00:21:17.148 "trtype": "TCP", 00:21:17.148 "adrfam": "IPv4", 00:21:17.148 "traddr": "10.0.0.1", 00:21:17.148 "trsvcid": "44186" 00:21:17.148 }, 00:21:17.148 "auth": { 00:21:17.148 "state": "completed", 00:21:17.148 "digest": "sha256", 00:21:17.148 "dhgroup": "ffdhe8192" 00:21:17.148 } 00:21:17.148 } 00:21:17.148 ]' 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.148 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.417 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.417 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.417 00:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.675 00:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.614 00:33:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:18.614 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.181 00:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.120 00:21:20.120 00:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.120 00:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.120 00:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.378 { 00:21:20.378 "cntlid": 43, 00:21:20.378 "qid": 0, 00:21:20.378 "state": "enabled", 00:21:20.378 "listen_address": { 00:21:20.378 "trtype": "TCP", 00:21:20.378 "adrfam": "IPv4", 00:21:20.378 "traddr": "10.0.0.2", 00:21:20.378 "trsvcid": "4420" 00:21:20.378 }, 00:21:20.378 "peer_address": { 00:21:20.378 "trtype": "TCP", 00:21:20.378 "adrfam": "IPv4", 00:21:20.378 "traddr": "10.0.0.1", 00:21:20.378 "trsvcid": "44196" 00:21:20.378 }, 00:21:20.378 "auth": { 00:21:20.378 "state": "completed", 00:21:20.378 "digest": "sha256", 00:21:20.378 "dhgroup": "ffdhe8192" 00:21:20.378 } 00:21:20.378 } 00:21:20.378 ]' 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.378 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.949 00:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:21.885 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.144 00:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.144 00:33:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.523 00:21:23.523 00:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.523 00:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.523 00:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.523 { 00:21:23.523 "cntlid": 45, 00:21:23.523 "qid": 0, 00:21:23.523 "state": "enabled", 00:21:23.523 "listen_address": { 00:21:23.523 "trtype": "TCP", 00:21:23.523 "adrfam": "IPv4", 00:21:23.523 "traddr": "10.0.0.2", 00:21:23.523 "trsvcid": "4420" 00:21:23.523 }, 00:21:23.523 "peer_address": { 00:21:23.523 "trtype": "TCP", 00:21:23.523 "adrfam": "IPv4", 00:21:23.523 "traddr": "10.0.0.1", 00:21:23.523 "trsvcid": "44238" 00:21:23.523 }, 00:21:23.523 "auth": { 00:21:23.523 "state": "completed", 00:21:23.523 "digest": "sha256", 00:21:23.523 "dhgroup": "ffdhe8192" 00:21:23.523 } 00:21:23.523 } 00:21:23.523 ]' 
00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.523 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.782 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.782 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.782 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.782 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.782 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.040 00:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.416 00:33:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:25.416 00:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.416 00:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:25.416 00:33:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.794 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.794 { 00:21:26.794 "cntlid": 47, 00:21:26.794 "qid": 0, 00:21:26.794 "state": "enabled", 00:21:26.794 "listen_address": { 00:21:26.794 "trtype": "TCP", 00:21:26.794 "adrfam": "IPv4", 00:21:26.794 "traddr": "10.0.0.2", 00:21:26.794 "trsvcid": "4420" 00:21:26.794 }, 00:21:26.794 "peer_address": { 00:21:26.794 "trtype": "TCP", 00:21:26.794 "adrfam": "IPv4", 00:21:26.794 "traddr": "10.0.0.1", 00:21:26.794 "trsvcid": "48328" 00:21:26.794 }, 00:21:26.794 "auth": { 00:21:26.794 "state": "completed", 00:21:26.794 "digest": "sha256", 00:21:26.794 "dhgroup": "ffdhe8192" 00:21:26.794 } 00:21:26.794 } 00:21:26.794 ]' 00:21:26.794 00:33:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.794 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.054 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.054 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.054 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.313 00:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.701 
00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.701 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.960 00:21:28.960 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:28.960 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:28.960 00:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.526 { 00:21:29.526 "cntlid": 49, 00:21:29.526 "qid": 0, 00:21:29.526 "state": "enabled", 00:21:29.526 "listen_address": { 00:21:29.526 "trtype": "TCP", 00:21:29.526 "adrfam": "IPv4", 00:21:29.526 "traddr": "10.0.0.2", 00:21:29.526 "trsvcid": "4420" 00:21:29.526 }, 00:21:29.526 "peer_address": { 00:21:29.526 "trtype": "TCP", 00:21:29.526 "adrfam": "IPv4", 00:21:29.526 "traddr": "10.0.0.1", 00:21:29.526 "trsvcid": "48352" 00:21:29.526 }, 00:21:29.526 "auth": 
{ 00:21:29.526 "state": "completed", 00:21:29.526 "digest": "sha384", 00:21:29.526 "dhgroup": "null" 00:21:29.526 } 00:21:29.526 } 00:21:29.526 ]' 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.526 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.783 00:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.162 00:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.731 00:21:31.731 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.731 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.731 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.990 { 00:21:31.990 "cntlid": 51, 00:21:31.990 "qid": 0, 00:21:31.990 "state": "enabled", 00:21:31.990 "listen_address": { 00:21:31.990 "trtype": "TCP", 00:21:31.990 "adrfam": "IPv4", 00:21:31.990 "traddr": "10.0.0.2", 00:21:31.990 "trsvcid": "4420" 00:21:31.990 }, 00:21:31.990 "peer_address": { 00:21:31.990 "trtype": "TCP", 00:21:31.990 "adrfam": "IPv4", 00:21:31.990 "traddr": "10.0.0.1", 00:21:31.990 "trsvcid": "48384" 00:21:31.990 }, 
00:21:31.990 "auth": { 00:21:31.990 "state": "completed", 00:21:31.990 "digest": "sha384", 00:21:31.990 "dhgroup": "null" 00:21:31.990 } 00:21:31.990 } 00:21:31.990 ]' 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.990 00:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.559 00:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.495 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.769 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.352 00:21:34.352 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.352 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.352 00:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.611 { 00:21:34.611 "cntlid": 53, 00:21:34.611 "qid": 0, 00:21:34.611 "state": "enabled", 00:21:34.611 "listen_address": { 00:21:34.611 "trtype": "TCP", 00:21:34.611 "adrfam": "IPv4", 00:21:34.611 "traddr": "10.0.0.2", 00:21:34.611 "trsvcid": "4420" 00:21:34.611 }, 00:21:34.611 "peer_address": { 00:21:34.611 "trtype": "TCP", 00:21:34.611 "adrfam": "IPv4", 00:21:34.611 "traddr": "10.0.0.1", 00:21:34.611 "trsvcid": "48404" 00:21:34.611 }, 
00:21:34.611 "auth": { 00:21:34.611 "state": "completed", 00:21:34.611 "digest": "sha384", 00:21:34.611 "dhgroup": "null" 00:21:34.611 } 00:21:34.611 } 00:21:34.611 ]' 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.611 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.870 00:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.249 00:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.508 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.766 00:21:36.766 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.766 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.766 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.024 { 00:21:37.024 "cntlid": 55, 00:21:37.024 "qid": 0, 00:21:37.024 "state": "enabled", 00:21:37.024 "listen_address": { 00:21:37.024 "trtype": "TCP", 00:21:37.024 "adrfam": "IPv4", 00:21:37.024 "traddr": "10.0.0.2", 00:21:37.024 "trsvcid": "4420" 00:21:37.024 }, 00:21:37.024 "peer_address": { 00:21:37.024 "trtype": "TCP", 00:21:37.024 "adrfam": "IPv4", 00:21:37.024 "traddr": "10.0.0.1", 00:21:37.024 "trsvcid": "53326" 00:21:37.024 }, 00:21:37.024 "auth": { 00:21:37.024 "state": "completed", 00:21:37.024 
"digest": "sha384", 00:21:37.024 "dhgroup": "null" 00:21:37.024 } 00:21:37.024 } 00:21:37.024 ]' 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.024 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.283 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:37.283 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.283 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.283 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.283 00:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.541 00:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.917 
00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.917 00:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.484 00:21:39.484 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.484 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.484 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.742 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.743 { 00:21:39.743 "cntlid": 57, 00:21:39.743 "qid": 0, 00:21:39.743 "state": "enabled", 00:21:39.743 "listen_address": { 00:21:39.743 "trtype": "TCP", 00:21:39.743 "adrfam": "IPv4", 00:21:39.743 "traddr": "10.0.0.2", 00:21:39.743 "trsvcid": "4420" 00:21:39.743 }, 00:21:39.743 "peer_address": { 00:21:39.743 "trtype": "TCP", 00:21:39.743 "adrfam": "IPv4", 00:21:39.743 "traddr": "10.0.0.1", 00:21:39.743 "trsvcid": "53362" 00:21:39.743 }, 00:21:39.743 "auth": 
{ 00:21:39.743 "state": "completed", 00:21:39.743 "digest": "sha384", 00:21:39.743 "dhgroup": "ffdhe2048" 00:21:39.743 } 00:21:39.743 } 00:21:39.743 ]' 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.743 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.311 00:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:21:41.245 00:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:41.245 00:34:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.245 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.502 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.503 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.503 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.068 00:21:42.068 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.068 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.068 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.326 { 00:21:42.326 "cntlid": 59, 00:21:42.326 "qid": 0, 00:21:42.326 "state": "enabled", 00:21:42.326 "listen_address": { 00:21:42.326 "trtype": "TCP", 00:21:42.326 "adrfam": "IPv4", 00:21:42.326 "traddr": "10.0.0.2", 00:21:42.326 "trsvcid": "4420" 00:21:42.326 }, 00:21:42.326 "peer_address": { 00:21:42.326 "trtype": "TCP", 00:21:42.326 "adrfam": "IPv4", 00:21:42.326 "traddr": 
"10.0.0.1", 00:21:42.326 "trsvcid": "53390" 00:21:42.326 }, 00:21:42.326 "auth": { 00:21:42.326 "state": "completed", 00:21:42.326 "digest": "sha384", 00:21:42.326 "dhgroup": "ffdhe2048" 00:21:42.326 } 00:21:42.326 } 00:21:42.326 ]' 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.326 00:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.326 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.326 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.326 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.326 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.326 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.586 00:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.964 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.222 00:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.479 00:21:44.479 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.479 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.479 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.743 { 00:21:44.743 "cntlid": 61, 00:21:44.743 "qid": 0, 00:21:44.743 "state": "enabled", 00:21:44.743 "listen_address": { 00:21:44.743 "trtype": "TCP", 00:21:44.743 "adrfam": "IPv4", 00:21:44.743 "traddr": "10.0.0.2", 00:21:44.743 "trsvcid": "4420" 00:21:44.743 }, 00:21:44.743 "peer_address": { 
00:21:44.743 "trtype": "TCP", 00:21:44.743 "adrfam": "IPv4", 00:21:44.743 "traddr": "10.0.0.1", 00:21:44.743 "trsvcid": "53406" 00:21:44.743 }, 00:21:44.743 "auth": { 00:21:44.743 "state": "completed", 00:21:44.743 "digest": "sha384", 00:21:44.743 "dhgroup": "ffdhe2048" 00:21:44.743 } 00:21:44.743 } 00:21:44.743 ]' 00:21:44.743 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.005 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.262 00:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:46.632 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:47.199 00:21:47.199 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.199 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.199 00:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.457 { 00:21:47.457 "cntlid": 63, 00:21:47.457 "qid": 0, 00:21:47.457 "state": "enabled", 00:21:47.457 "listen_address": { 00:21:47.457 "trtype": "TCP", 00:21:47.457 "adrfam": "IPv4", 00:21:47.457 "traddr": "10.0.0.2", 00:21:47.457 "trsvcid": "4420" 00:21:47.457 }, 00:21:47.457 "peer_address": { 00:21:47.457 "trtype": "TCP", 00:21:47.457 "adrfam": 
"IPv4", 00:21:47.457 "traddr": "10.0.0.1", 00:21:47.457 "trsvcid": "56492" 00:21:47.457 }, 00:21:47.457 "auth": { 00:21:47.457 "state": "completed", 00:21:47.457 "digest": "sha384", 00:21:47.457 "dhgroup": "ffdhe2048" 00:21:47.457 } 00:21:47.457 } 00:21:47.457 ]' 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.457 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.024 00:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:48.980 00:34:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:48.980 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.238 00:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.523 00:21:49.796 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.796 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.796 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.056 { 00:21:50.056 "cntlid": 65, 00:21:50.056 "qid": 0, 00:21:50.056 "state": "enabled", 00:21:50.056 "listen_address": { 00:21:50.056 "trtype": "TCP", 00:21:50.056 "adrfam": "IPv4", 00:21:50.056 "traddr": "10.0.0.2", 00:21:50.056 "trsvcid": "4420" 00:21:50.056 }, 00:21:50.056 
"peer_address": { 00:21:50.056 "trtype": "TCP", 00:21:50.056 "adrfam": "IPv4", 00:21:50.056 "traddr": "10.0.0.1", 00:21:50.056 "trsvcid": "56516" 00:21:50.056 }, 00:21:50.056 "auth": { 00:21:50.056 "state": "completed", 00:21:50.056 "digest": "sha384", 00:21:50.056 "dhgroup": "ffdhe3072" 00:21:50.056 } 00:21:50.056 } 00:21:50.056 ]' 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.056 00:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.317 00:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:51.697 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.956 
00:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.956 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.214 00:21:52.214 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.214 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.214 00:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.473 { 00:21:52.473 "cntlid": 67, 00:21:52.473 "qid": 0, 00:21:52.473 "state": "enabled", 00:21:52.473 "listen_address": { 00:21:52.473 "trtype": "TCP", 00:21:52.473 "adrfam": "IPv4", 00:21:52.473 "traddr": "10.0.0.2", 
00:21:52.473 "trsvcid": "4420" 00:21:52.473 }, 00:21:52.473 "peer_address": { 00:21:52.473 "trtype": "TCP", 00:21:52.473 "adrfam": "IPv4", 00:21:52.473 "traddr": "10.0.0.1", 00:21:52.473 "trsvcid": "56528" 00:21:52.473 }, 00:21:52.473 "auth": { 00:21:52.473 "state": "completed", 00:21:52.473 "digest": "sha384", 00:21:52.473 "dhgroup": "ffdhe3072" 00:21:52.473 } 00:21:52.473 } 00:21:52.473 ]' 00:21:52.473 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.732 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.990 00:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.366 00:34:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.366 00:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.625 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.884 00:21:54.884 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.884 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.884 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.142 { 00:21:55.142 "cntlid": 69, 00:21:55.142 "qid": 0, 00:21:55.142 "state": "enabled", 00:21:55.142 "listen_address": { 00:21:55.142 "trtype": "TCP", 
00:21:55.142 "adrfam": "IPv4", 00:21:55.142 "traddr": "10.0.0.2", 00:21:55.142 "trsvcid": "4420" 00:21:55.142 }, 00:21:55.142 "peer_address": { 00:21:55.142 "trtype": "TCP", 00:21:55.142 "adrfam": "IPv4", 00:21:55.142 "traddr": "10.0.0.1", 00:21:55.142 "trsvcid": "56556" 00:21:55.142 }, 00:21:55.142 "auth": { 00:21:55.142 "state": "completed", 00:21:55.142 "digest": "sha384", 00:21:55.142 "dhgroup": "ffdhe3072" 00:21:55.142 } 00:21:55.142 } 00:21:55.142 ]' 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.142 00:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.401 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:55.401 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.401 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.401 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.401 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.659 00:34:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.039 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.039 00:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:57.608 00:21:57.608 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.608 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.608 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.867 { 00:21:57.867 "cntlid": 71, 00:21:57.867 "qid": 0, 00:21:57.867 "state": "enabled", 00:21:57.867 "listen_address": { 00:21:57.867 "trtype": "TCP", 00:21:57.867 "adrfam": "IPv4", 00:21:57.867 "traddr": 
"10.0.0.2", 00:21:57.867 "trsvcid": "4420" 00:21:57.867 }, 00:21:57.867 "peer_address": { 00:21:57.867 "trtype": "TCP", 00:21:57.867 "adrfam": "IPv4", 00:21:57.867 "traddr": "10.0.0.1", 00:21:57.867 "trsvcid": "37196" 00:21:57.867 }, 00:21:57.867 "auth": { 00:21:57.867 "state": "completed", 00:21:57.867 "digest": "sha384", 00:21:57.867 "dhgroup": "ffdhe3072" 00:21:57.867 } 00:21:57.867 } 00:21:57.867 ]' 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.867 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.128 00:34:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.509 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.769 00:34:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.769 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.028 00:22:00.028 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.028 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.028 00:34:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.597 { 00:22:00.597 "cntlid": 73, 00:22:00.597 "qid": 0, 00:22:00.597 "state": "enabled", 00:22:00.597 "listen_address": { 00:22:00.597 
"trtype": "TCP", 00:22:00.597 "adrfam": "IPv4", 00:22:00.597 "traddr": "10.0.0.2", 00:22:00.597 "trsvcid": "4420" 00:22:00.597 }, 00:22:00.597 "peer_address": { 00:22:00.597 "trtype": "TCP", 00:22:00.597 "adrfam": "IPv4", 00:22:00.597 "traddr": "10.0.0.1", 00:22:00.597 "trsvcid": "37224" 00:22:00.597 }, 00:22:00.597 "auth": { 00:22:00.597 "state": "completed", 00:22:00.597 "digest": "sha384", 00:22:00.597 "dhgroup": "ffdhe4096" 00:22:00.597 } 00:22:00.597 } 00:22:00.597 ]' 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.597 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.858 00:34:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:02.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.236 00:34:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.236 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.802 00:22:02.802 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:02.802 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:02.802 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.060 { 00:22:03.060 "cntlid": 75, 00:22:03.060 "qid": 0, 
00:22:03.060 "state": "enabled", 00:22:03.060 "listen_address": { 00:22:03.060 "trtype": "TCP", 00:22:03.060 "adrfam": "IPv4", 00:22:03.060 "traddr": "10.0.0.2", 00:22:03.060 "trsvcid": "4420" 00:22:03.060 }, 00:22:03.060 "peer_address": { 00:22:03.060 "trtype": "TCP", 00:22:03.060 "adrfam": "IPv4", 00:22:03.060 "traddr": "10.0.0.1", 00:22:03.060 "trsvcid": "37242" 00:22:03.060 }, 00:22:03.060 "auth": { 00:22:03.060 "state": "completed", 00:22:03.060 "digest": "sha384", 00:22:03.060 "dhgroup": "ffdhe4096" 00:22:03.060 } 00:22:03.060 } 00:22:03.060 ]' 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.060 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.061 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.319 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.319 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.319 00:34:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.577 00:34:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.951 
00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.951 00:34:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.517 00:22:05.517 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:05.517 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.517 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.813 { 00:22:05.813 
"cntlid": 77, 00:22:05.813 "qid": 0, 00:22:05.813 "state": "enabled", 00:22:05.813 "listen_address": { 00:22:05.813 "trtype": "TCP", 00:22:05.813 "adrfam": "IPv4", 00:22:05.813 "traddr": "10.0.0.2", 00:22:05.813 "trsvcid": "4420" 00:22:05.813 }, 00:22:05.813 "peer_address": { 00:22:05.813 "trtype": "TCP", 00:22:05.813 "adrfam": "IPv4", 00:22:05.813 "traddr": "10.0.0.1", 00:22:05.813 "trsvcid": "37264" 00:22:05.813 }, 00:22:05.813 "auth": { 00:22:05.813 "state": "completed", 00:22:05.813 "digest": "sha384", 00:22:05.813 "dhgroup": "ffdhe4096" 00:22:05.813 } 00:22:05.813 } 00:22:05.813 ]' 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.813 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.097 00:34:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:22:07.471 00:34:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.471 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:07.471 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.471 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.471 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.472 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.472 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:07.472 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 
00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.730 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:07.988 00:22:07.988 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.988 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.988 00:34:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.553 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.553 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.553 00:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.553 00:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.554 { 00:22:08.554 "cntlid": 79, 00:22:08.554 "qid": 0, 
00:22:08.554 "state": "enabled", 00:22:08.554 "listen_address": { 00:22:08.554 "trtype": "TCP", 00:22:08.554 "adrfam": "IPv4", 00:22:08.554 "traddr": "10.0.0.2", 00:22:08.554 "trsvcid": "4420" 00:22:08.554 }, 00:22:08.554 "peer_address": { 00:22:08.554 "trtype": "TCP", 00:22:08.554 "adrfam": "IPv4", 00:22:08.554 "traddr": "10.0.0.1", 00:22:08.554 "trsvcid": "47334" 00:22:08.554 }, 00:22:08.554 "auth": { 00:22:08.554 "state": "completed", 00:22:08.554 "digest": "sha384", 00:22:08.554 "dhgroup": "ffdhe4096" 00:22:08.554 } 00:22:08.554 } 00:22:08.554 ]' 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.554 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.812 00:34:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:22:10.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:10.186 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.187 00:34:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.445 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.011 00:22:11.011 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:11.011 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:11.011 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:22:11.283 { 00:22:11.283 "cntlid": 81, 00:22:11.283 "qid": 0, 00:22:11.283 "state": "enabled", 00:22:11.283 "listen_address": { 00:22:11.283 "trtype": "TCP", 00:22:11.283 "adrfam": "IPv4", 00:22:11.283 "traddr": "10.0.0.2", 00:22:11.283 "trsvcid": "4420" 00:22:11.283 }, 00:22:11.283 "peer_address": { 00:22:11.283 "trtype": "TCP", 00:22:11.283 "adrfam": "IPv4", 00:22:11.283 "traddr": "10.0.0.1", 00:22:11.283 "trsvcid": "47376" 00:22:11.283 }, 00:22:11.283 "auth": { 00:22:11.283 "state": "completed", 00:22:11.283 "digest": "sha384", 00:22:11.283 "dhgroup": "ffdhe6144" 00:22:11.283 } 00:22:11.283 } 00:22:11.283 ]' 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.283 00:34:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:11.283 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:11.283 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:11.283 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.283 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.283 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.541 00:34:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret 
DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.915 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.173 00:34:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.739 00:22:13.739 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:13.739 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:13.739 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.997 00:34:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.997 { 00:22:13.997 "cntlid": 83, 00:22:13.997 "qid": 0, 00:22:13.997 "state": "enabled", 00:22:13.997 "listen_address": { 00:22:13.997 "trtype": "TCP", 00:22:13.997 "adrfam": "IPv4", 00:22:13.997 "traddr": "10.0.0.2", 00:22:13.997 "trsvcid": "4420" 00:22:13.997 }, 00:22:13.997 "peer_address": { 00:22:13.997 "trtype": "TCP", 00:22:13.997 "adrfam": "IPv4", 00:22:13.997 "traddr": "10.0.0.1", 00:22:13.997 "trsvcid": "47406" 00:22:13.997 }, 00:22:13.997 "auth": { 00:22:13.997 "state": "completed", 00:22:13.997 "digest": "sha384", 00:22:13.997 "dhgroup": "ffdhe6144" 00:22:13.997 } 00:22:13.997 } 00:22:13.997 ]' 00:22:13.997 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.255 00:34:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.513 00:34:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:15.887 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.888 00:34:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.888 00:34:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.818 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.818 00:34:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.076 { 00:22:17.076 "cntlid": 85, 00:22:17.076 "qid": 0, 00:22:17.076 "state": "enabled", 00:22:17.076 "listen_address": { 00:22:17.076 "trtype": "TCP", 00:22:17.076 "adrfam": "IPv4", 00:22:17.076 "traddr": "10.0.0.2", 00:22:17.076 "trsvcid": "4420" 00:22:17.076 }, 00:22:17.076 "peer_address": { 00:22:17.076 "trtype": "TCP", 00:22:17.076 "adrfam": "IPv4", 00:22:17.076 "traddr": "10.0.0.1", 00:22:17.076 "trsvcid": "57844" 00:22:17.076 }, 00:22:17.076 "auth": { 00:22:17.076 "state": "completed", 00:22:17.076 "digest": "sha384", 00:22:17.076 "dhgroup": "ffdhe6144" 00:22:17.076 } 00:22:17.076 } 00:22:17.076 ]' 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.076 00:34:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.333 00:34:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:18.706 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:18.963 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:22:18.963 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.963 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:18.963 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.964 00:34:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.528 00:22:19.528 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.528 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.528 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.785 { 00:22:19.785 "cntlid": 87, 00:22:19.785 "qid": 0, 00:22:19.785 "state": "enabled", 00:22:19.785 "listen_address": { 00:22:19.785 "trtype": "TCP", 00:22:19.785 "adrfam": "IPv4", 00:22:19.785 "traddr": "10.0.0.2", 00:22:19.785 "trsvcid": "4420" 00:22:19.785 }, 00:22:19.785 "peer_address": { 00:22:19.785 "trtype": "TCP", 00:22:19.785 "adrfam": "IPv4", 00:22:19.785 "traddr": "10.0.0.1", 00:22:19.785 "trsvcid": "57862" 00:22:19.785 }, 00:22:19.785 "auth": { 00:22:19.785 "state": "completed", 00:22:19.785 "digest": "sha384", 00:22:19.785 "dhgroup": "ffdhe6144" 00:22:19.785 } 00:22:19.785 } 00:22:19.785 ]' 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.785 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:20.042 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:20.042 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.042 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.042 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.299 00:34:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.668 00:34:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.041 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:23.041 { 00:22:23.041 "cntlid": 89, 00:22:23.041 "qid": 0, 00:22:23.041 "state": "enabled", 00:22:23.041 "listen_address": { 00:22:23.041 "trtype": "TCP", 00:22:23.041 "adrfam": "IPv4", 00:22:23.041 "traddr": "10.0.0.2", 00:22:23.041 "trsvcid": "4420" 00:22:23.041 }, 00:22:23.041 "peer_address": { 00:22:23.041 "trtype": "TCP", 00:22:23.041 "adrfam": "IPv4", 00:22:23.041 "traddr": "10.0.0.1", 00:22:23.041 "trsvcid": "57890" 00:22:23.041 }, 00:22:23.041 "auth": { 00:22:23.041 "state": "completed", 00:22:23.041 "digest": "sha384", 00:22:23.041 "dhgroup": "ffdhe8192" 00:22:23.041 } 00:22:23.041 } 00:22:23.041 ]' 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.041 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:23.298 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.298 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.298 00:34:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.556 00:34:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:24.930 00:34:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.930 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.931 00:34:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.303 00:22:26.303 00:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.303 00:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.303 00:34:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.303 { 00:22:26.303 "cntlid": 91, 00:22:26.303 "qid": 0, 00:22:26.303 "state": "enabled", 00:22:26.303 "listen_address": { 00:22:26.303 "trtype": "TCP", 00:22:26.303 "adrfam": "IPv4", 00:22:26.303 "traddr": "10.0.0.2", 00:22:26.303 "trsvcid": "4420" 00:22:26.303 }, 00:22:26.303 "peer_address": { 00:22:26.303 "trtype": "TCP", 00:22:26.303 "adrfam": "IPv4", 00:22:26.303 "traddr": "10.0.0.1", 00:22:26.303 "trsvcid": "57904" 00:22:26.303 }, 00:22:26.303 "auth": { 00:22:26.303 "state": "completed", 00:22:26.303 "digest": "sha384", 00:22:26.303 "dhgroup": "ffdhe8192" 00:22:26.303 } 00:22:26.303 } 00:22:26.303 ]' 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.303 00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.561 
00:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:27.934 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.193 00:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.127 00:22:29.128 00:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:29.128 00:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.128 00:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:29.385 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.386 00:34:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.386 00:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.386 00:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:29.644 { 00:22:29.644 "cntlid": 93, 00:22:29.644 "qid": 0, 00:22:29.644 "state": "enabled", 00:22:29.644 "listen_address": { 00:22:29.644 "trtype": "TCP", 00:22:29.644 "adrfam": "IPv4", 00:22:29.644 "traddr": "10.0.0.2", 00:22:29.644 "trsvcid": "4420" 00:22:29.644 }, 00:22:29.644 "peer_address": { 00:22:29.644 "trtype": "TCP", 00:22:29.644 "adrfam": "IPv4", 00:22:29.644 "traddr": "10.0.0.1", 00:22:29.644 "trsvcid": "42598" 00:22:29.644 }, 00:22:29.644 "auth": { 00:22:29.644 "state": "completed", 00:22:29.644 "digest": "sha384", 00:22:29.644 "dhgroup": "ffdhe8192" 00:22:29.644 } 00:22:29.644 } 00:22:29.644 ]' 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.644 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:29.904 00:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.290 00:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:22:31.549 00:34:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:31.549 00:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:32.492 00:22:32.492 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:32.492 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:32.492 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:32.751 { 00:22:32.751 "cntlid": 95, 00:22:32.751 "qid": 0, 00:22:32.751 "state": "enabled", 00:22:32.751 "listen_address": { 00:22:32.751 "trtype": "TCP", 00:22:32.751 "adrfam": "IPv4", 00:22:32.751 "traddr": "10.0.0.2", 00:22:32.751 "trsvcid": "4420" 00:22:32.751 }, 00:22:32.751 "peer_address": { 00:22:32.751 "trtype": "TCP", 00:22:32.751 "adrfam": "IPv4", 00:22:32.751 "traddr": "10.0.0.1", 00:22:32.751 "trsvcid": "42624" 00:22:32.751 }, 00:22:32.751 "auth": { 00:22:32.751 "state": "completed", 00:22:32.751 "digest": "sha384", 00:22:32.751 "dhgroup": "ffdhe8192" 00:22:32.751 } 00:22:32.751 } 00:22:32.751 ]' 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.751 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.010 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.010 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.010 00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.270 
00:35:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:22:34.267 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.267 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:34.267 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.267 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:34.526 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:34.797 00:35:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.797 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.062 00:22:35.062 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.062 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.062 00:35:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.320 00:35:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.320 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.320 00:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.320 00:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.320 00:35:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.320 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:35.320 { 00:22:35.320 "cntlid": 97, 00:22:35.320 "qid": 0, 00:22:35.320 "state": "enabled", 00:22:35.320 "listen_address": { 00:22:35.320 "trtype": "TCP", 00:22:35.320 "adrfam": "IPv4", 00:22:35.320 "traddr": "10.0.0.2", 00:22:35.320 "trsvcid": "4420" 00:22:35.320 }, 00:22:35.320 "peer_address": { 00:22:35.320 "trtype": "TCP", 00:22:35.320 "adrfam": "IPv4", 00:22:35.321 "traddr": "10.0.0.1", 00:22:35.321 "trsvcid": "42662" 00:22:35.321 }, 00:22:35.321 "auth": { 00:22:35.321 "state": "completed", 00:22:35.321 "digest": "sha512", 00:22:35.321 "dhgroup": "null" 00:22:35.321 } 00:22:35.321 } 00:22:35.321 ]' 00:22:35.321 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:35.321 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.321 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:35.321 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:35.321 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:35.579 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.579 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.579 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.837 00:35:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:36.774 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:37.032 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey 
qpairs 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.291 00:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.549 00:22:37.549 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:37.549 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:37.549 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:37.808 { 00:22:37.808 "cntlid": 99, 00:22:37.808 "qid": 0, 00:22:37.808 "state": "enabled", 00:22:37.808 "listen_address": { 00:22:37.808 "trtype": "TCP", 00:22:37.808 "adrfam": "IPv4", 00:22:37.808 "traddr": "10.0.0.2", 00:22:37.808 "trsvcid": "4420" 00:22:37.808 }, 00:22:37.808 "peer_address": { 00:22:37.808 "trtype": "TCP", 00:22:37.808 "adrfam": "IPv4", 00:22:37.808 "traddr": "10.0.0.1", 00:22:37.808 "trsvcid": "44792" 00:22:37.808 }, 00:22:37.808 "auth": { 00:22:37.808 "state": "completed", 00:22:37.808 "digest": "sha512", 00:22:37.808 "dhgroup": "null" 00:22:37.808 } 00:22:37.808 } 00:22:37.808 ]' 00:22:37.808 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.066 00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.066 
00:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.324 00:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:39.703 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.962 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.220 00:22:40.220 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.220 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.220 00:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.478 { 00:22:40.478 "cntlid": 101, 00:22:40.478 "qid": 0, 00:22:40.478 "state": "enabled", 00:22:40.478 "listen_address": { 00:22:40.478 "trtype": "TCP", 00:22:40.478 "adrfam": "IPv4", 00:22:40.478 "traddr": "10.0.0.2", 00:22:40.478 "trsvcid": "4420" 00:22:40.478 }, 00:22:40.478 "peer_address": { 00:22:40.478 "trtype": "TCP", 00:22:40.478 "adrfam": "IPv4", 00:22:40.478 "traddr": "10.0.0.1", 00:22:40.478 "trsvcid": "44824" 00:22:40.478 }, 00:22:40.478 "auth": { 00:22:40.478 "state": "completed", 00:22:40.478 "digest": "sha512", 00:22:40.478 "dhgroup": "null" 00:22:40.478 } 00:22:40.478 } 00:22:40.478 ]' 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:40.478 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:40.737 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.737 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.737 
00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.995 00:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:42.371 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.371 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:42.940 00:22:42.940 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:42.940 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:42.940 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.199 { 00:22:43.199 "cntlid": 103, 00:22:43.199 "qid": 0, 00:22:43.199 "state": "enabled", 00:22:43.199 "listen_address": { 00:22:43.199 "trtype": "TCP", 00:22:43.199 "adrfam": "IPv4", 00:22:43.199 "traddr": "10.0.0.2", 00:22:43.199 "trsvcid": "4420" 00:22:43.199 }, 00:22:43.199 "peer_address": { 00:22:43.199 "trtype": "TCP", 00:22:43.199 "adrfam": "IPv4", 00:22:43.199 "traddr": "10.0.0.1", 00:22:43.199 "trsvcid": "44848" 00:22:43.199 }, 00:22:43.199 "auth": { 00:22:43.199 "state": "completed", 00:22:43.199 "digest": "sha512", 00:22:43.199 "dhgroup": "null" 00:22:43.199 } 00:22:43.199 } 00:22:43.199 ]' 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.199 00:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.458 00:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.836 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.837 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.096 00:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.096 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.096 00:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.354 00:22:45.354 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:45.354 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:45.354 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:45.612 { 00:22:45.612 "cntlid": 105, 00:22:45.612 "qid": 0, 00:22:45.612 "state": "enabled", 00:22:45.612 "listen_address": { 00:22:45.612 "trtype": "TCP", 00:22:45.612 "adrfam": "IPv4", 00:22:45.612 "traddr": "10.0.0.2", 00:22:45.612 "trsvcid": "4420" 00:22:45.612 }, 00:22:45.612 "peer_address": { 00:22:45.612 "trtype": "TCP", 00:22:45.612 "adrfam": "IPv4", 00:22:45.612 "traddr": "10.0.0.1", 00:22:45.612 "trsvcid": "44874" 00:22:45.612 }, 00:22:45.612 "auth": { 00:22:45.612 "state": "completed", 00:22:45.612 "digest": "sha512", 00:22:45.612 "dhgroup": "ffdhe2048" 00:22:45.612 } 00:22:45.612 } 00:22:45.612 ]' 00:22:45.612 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:45.613 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.613 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:45.613 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:45.613 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:45.871 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.871 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:22:45.871 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.129 00:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:47.507 00:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:47.507 00:35:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.507 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.075 00:22:48.075 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:48.075 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:48.075 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:48.333 { 00:22:48.333 "cntlid": 107, 00:22:48.333 "qid": 0, 00:22:48.333 "state": "enabled", 00:22:48.333 "listen_address": { 00:22:48.333 "trtype": "TCP", 00:22:48.333 "adrfam": "IPv4", 00:22:48.333 "traddr": "10.0.0.2", 00:22:48.333 "trsvcid": "4420" 00:22:48.333 }, 00:22:48.333 "peer_address": { 00:22:48.333 "trtype": "TCP", 00:22:48.333 "adrfam": "IPv4", 00:22:48.333 "traddr": "10.0.0.1", 00:22:48.333 "trsvcid": "43876" 00:22:48.333 }, 00:22:48.333 "auth": { 00:22:48.333 "state": "completed", 00:22:48.333 "digest": "sha512", 00:22:48.333 "dhgroup": "ffdhe2048" 00:22:48.333 } 00:22:48.333 } 00:22:48.333 ]' 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.333 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:48.333 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:48.333 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:48.333 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.333 00:35:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.333 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.592 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:49.972 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:50.231 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.232 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.489 00:22:50.489 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:50.489 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:50.489 00:35:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:50.746 { 00:22:50.746 "cntlid": 109, 00:22:50.746 "qid": 0, 00:22:50.746 "state": "enabled", 00:22:50.746 "listen_address": { 00:22:50.746 "trtype": "TCP", 00:22:50.746 "adrfam": "IPv4", 00:22:50.746 "traddr": "10.0.0.2", 00:22:50.746 "trsvcid": "4420" 00:22:50.746 }, 00:22:50.746 "peer_address": { 00:22:50.746 "trtype": "TCP", 00:22:50.746 "adrfam": "IPv4", 00:22:50.746 "traddr": "10.0.0.1", 00:22:50.746 "trsvcid": "43910" 00:22:50.746 }, 00:22:50.746 "auth": { 00:22:50.746 "state": "completed", 00:22:50.746 "digest": "sha512", 00:22:50.746 "dhgroup": "ffdhe2048" 00:22:50.746 } 00:22:50.746 } 00:22:50.746 ]' 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.746 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:51.002 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:51.002 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:51.002 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:22:51.002 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.002 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.262 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:52.639 00:35:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.639 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:52.899 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.158 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:53.416 { 00:22:53.416 "cntlid": 111, 00:22:53.416 "qid": 0, 00:22:53.416 "state": "enabled", 00:22:53.416 "listen_address": { 00:22:53.416 "trtype": "TCP", 00:22:53.416 "adrfam": "IPv4", 00:22:53.416 "traddr": "10.0.0.2", 00:22:53.416 "trsvcid": "4420" 00:22:53.416 }, 00:22:53.416 "peer_address": { 00:22:53.416 "trtype": "TCP", 00:22:53.416 "adrfam": "IPv4", 00:22:53.416 "traddr": "10.0.0.1", 00:22:53.416 "trsvcid": "43944" 00:22:53.416 }, 00:22:53.416 "auth": { 00:22:53.416 "state": "completed", 00:22:53.416 "digest": "sha512", 00:22:53.416 "dhgroup": "ffdhe2048" 00:22:53.416 } 00:22:53.416 } 00:22:53.416 ]' 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.416 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.674 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:55.051 00:35:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:55.051 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.052 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.309 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.309 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.309 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.566 00:22:55.566 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:55.566 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.566 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:55.824 { 00:22:55.824 "cntlid": 113, 00:22:55.824 "qid": 0, 00:22:55.824 "state": "enabled", 00:22:55.824 "listen_address": { 00:22:55.824 "trtype": "TCP", 00:22:55.824 "adrfam": "IPv4", 00:22:55.824 "traddr": "10.0.0.2", 00:22:55.824 "trsvcid": "4420" 00:22:55.824 }, 00:22:55.824 "peer_address": { 00:22:55.824 "trtype": "TCP", 00:22:55.824 "adrfam": "IPv4", 00:22:55.824 "traddr": "10.0.0.1", 00:22:55.824 "trsvcid": "52110" 00:22:55.824 }, 00:22:55.824 "auth": { 00:22:55.824 "state": "completed", 00:22:55.824 "digest": "sha512", 00:22:55.824 "dhgroup": "ffdhe3072" 00:22:55.824 } 00:22:55.824 } 00:22:55.824 ]' 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.824 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:56.082 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:56.082 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:56.082 00:35:23 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.082 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.082 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.340 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.715 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.281 00:22:58.281 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:22:58.281 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.281 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:58.538 { 00:22:58.538 "cntlid": 115, 00:22:58.538 "qid": 0, 00:22:58.538 "state": "enabled", 00:22:58.538 "listen_address": { 00:22:58.538 "trtype": "TCP", 00:22:58.538 "adrfam": "IPv4", 00:22:58.538 "traddr": "10.0.0.2", 00:22:58.538 "trsvcid": "4420" 00:22:58.538 }, 00:22:58.538 "peer_address": { 00:22:58.538 "trtype": "TCP", 00:22:58.538 "adrfam": "IPv4", 00:22:58.538 "traddr": "10.0.0.1", 00:22:58.538 "trsvcid": "52138" 00:22:58.538 }, 00:22:58.538 "auth": { 00:22:58.538 "state": "completed", 00:22:58.538 "digest": "sha512", 00:22:58.538 "dhgroup": "ffdhe3072" 00:22:58.538 } 00:22:58.538 } 00:22:58.538 ]' 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.538 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.103 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:00.042 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.300 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.868 00:23:00.868 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:23:00.868 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:00.868 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:01.127 { 00:23:01.127 "cntlid": 117, 00:23:01.127 "qid": 0, 00:23:01.127 "state": "enabled", 00:23:01.127 "listen_address": { 00:23:01.127 "trtype": "TCP", 00:23:01.127 "adrfam": "IPv4", 00:23:01.127 "traddr": "10.0.0.2", 00:23:01.127 "trsvcid": "4420" 00:23:01.127 }, 00:23:01.127 "peer_address": { 00:23:01.127 "trtype": "TCP", 00:23:01.127 "adrfam": "IPv4", 00:23:01.127 "traddr": "10.0.0.1", 00:23:01.127 "trsvcid": "52160" 00:23:01.127 }, 00:23:01.127 "auth": { 00:23:01.127 "state": "completed", 00:23:01.127 "digest": "sha512", 00:23:01.127 "dhgroup": "ffdhe3072" 00:23:01.127 } 00:23:01.127 } 00:23:01.127 ]' 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.127 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.386 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:02.767 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:03.026 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:03.285 00:23:03.285 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:03.285 00:35:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:03.285 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:03.543 { 00:23:03.543 "cntlid": 119, 00:23:03.543 "qid": 0, 00:23:03.543 "state": "enabled", 00:23:03.543 "listen_address": { 00:23:03.543 "trtype": "TCP", 00:23:03.543 "adrfam": "IPv4", 00:23:03.543 "traddr": "10.0.0.2", 00:23:03.543 "trsvcid": "4420" 00:23:03.543 }, 00:23:03.543 "peer_address": { 00:23:03.543 "trtype": "TCP", 00:23:03.543 "adrfam": "IPv4", 00:23:03.543 "traddr": "10.0.0.1", 00:23:03.543 "trsvcid": "52190" 00:23:03.543 }, 00:23:03.543 "auth": { 00:23:03.543 "state": "completed", 00:23:03.543 "digest": "sha512", 00:23:03.543 "dhgroup": "ffdhe3072" 00:23:03.543 } 00:23:03.543 } 00:23:03.543 ]' 00:23:03.543 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.801 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.060 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:05.437 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.437 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.020 
00:23:06.020 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:06.020 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.020 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.278 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:06.278 { 00:23:06.278 "cntlid": 121, 00:23:06.278 "qid": 0, 00:23:06.278 "state": "enabled", 00:23:06.278 "listen_address": { 00:23:06.278 "trtype": "TCP", 00:23:06.278 "adrfam": "IPv4", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "trsvcid": "4420" 00:23:06.278 }, 00:23:06.278 "peer_address": { 00:23:06.278 "trtype": "TCP", 00:23:06.278 "adrfam": "IPv4", 00:23:06.278 "traddr": "10.0.0.1", 00:23:06.278 "trsvcid": "52652" 00:23:06.278 }, 00:23:06.278 "auth": { 00:23:06.278 "state": "completed", 00:23:06.278 "digest": "sha512", 00:23:06.278 "dhgroup": "ffdhe4096" 00:23:06.278 } 00:23:06.278 } 00:23:06.278 ]' 00:23:06.279 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:06.279 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.279 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:06.279 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:06.279 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:06.537 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.537 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.537 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.795 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.175 
00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:08.175 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.176 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.764 00:23:08.764 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:08.764 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:08.764 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:09.022 { 00:23:09.022 "cntlid": 123, 00:23:09.022 "qid": 0, 00:23:09.022 "state": "enabled", 00:23:09.022 "listen_address": { 00:23:09.022 "trtype": "TCP", 00:23:09.022 "adrfam": "IPv4", 00:23:09.022 "traddr": "10.0.0.2", 00:23:09.022 "trsvcid": "4420" 00:23:09.022 }, 00:23:09.022 "peer_address": { 00:23:09.022 "trtype": "TCP", 00:23:09.022 "adrfam": "IPv4", 00:23:09.022 "traddr": "10.0.0.1", 00:23:09.022 "trsvcid": "52668" 00:23:09.022 }, 00:23:09.022 "auth": { 00:23:09.022 "state": "completed", 00:23:09.022 "digest": "sha512", 00:23:09.022 "dhgroup": "ffdhe4096" 00:23:09.022 } 00:23:09.022 } 00:23:09.022 ]' 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.022 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.591 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.529 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.099 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.357 00:23:11.357 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:11.357 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:11.357 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.616 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.616 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.616 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.617 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.617 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.617 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:11.617 { 00:23:11.617 "cntlid": 125, 00:23:11.617 "qid": 0, 00:23:11.617 "state": "enabled", 00:23:11.617 "listen_address": { 00:23:11.617 "trtype": "TCP", 00:23:11.617 "adrfam": "IPv4", 00:23:11.617 "traddr": "10.0.0.2", 00:23:11.617 "trsvcid": "4420" 00:23:11.617 }, 00:23:11.617 "peer_address": { 00:23:11.617 "trtype": "TCP", 00:23:11.617 "adrfam": "IPv4", 00:23:11.617 "traddr": "10.0.0.1", 00:23:11.617 "trsvcid": "52698" 00:23:11.617 }, 00:23:11.617 "auth": { 00:23:11.617 "state": "completed", 00:23:11.617 "digest": "sha512", 00:23:11.617 "dhgroup": "ffdhe4096" 00:23:11.617 } 00:23:11.617 } 00:23:11.617 ]' 00:23:11.617 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:11.617 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.617 00:35:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:11.875 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:11.875 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:11.875 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.875 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.875 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.133 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.511 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:13.511 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:14.078 00:23:14.078 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:14.078 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:14.078 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:14.336 { 00:23:14.336 "cntlid": 127, 00:23:14.336 "qid": 0, 00:23:14.336 "state": "enabled", 00:23:14.336 "listen_address": { 00:23:14.336 "trtype": "TCP", 00:23:14.336 "adrfam": "IPv4", 00:23:14.336 "traddr": "10.0.0.2", 00:23:14.336 "trsvcid": "4420" 00:23:14.336 }, 00:23:14.336 "peer_address": { 00:23:14.336 "trtype": "TCP", 00:23:14.336 "adrfam": "IPv4", 00:23:14.336 "traddr": "10.0.0.1", 00:23:14.336 "trsvcid": "52742" 00:23:14.336 }, 00:23:14.336 "auth": { 00:23:14.336 "state": "completed", 00:23:14.336 "digest": "sha512", 00:23:14.336 "dhgroup": "ffdhe4096" 00:23:14.336 } 00:23:14.336 } 00:23:14.336 ]' 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:14.336 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:14.595 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.595 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.595 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.853 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:16.230 00:35:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.230 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.231 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.231 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.231 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.231 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.231 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.801 00:23:17.060 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:17.060 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:17.060 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:17.319 { 00:23:17.319 "cntlid": 129, 00:23:17.319 "qid": 0, 00:23:17.319 "state": "enabled", 00:23:17.319 "listen_address": { 00:23:17.319 "trtype": "TCP", 00:23:17.319 "adrfam": "IPv4", 00:23:17.319 "traddr": "10.0.0.2", 00:23:17.319 "trsvcid": "4420" 00:23:17.319 }, 00:23:17.319 "peer_address": { 00:23:17.319 "trtype": "TCP", 00:23:17.319 "adrfam": "IPv4", 00:23:17.319 "traddr": "10.0.0.1", 00:23:17.319 "trsvcid": "45608" 00:23:17.319 }, 00:23:17.319 "auth": { 00:23:17.319 "state": "completed", 00:23:17.319 "digest": "sha512", 00:23:17.319 "dhgroup": "ffdhe6144" 00:23:17.319 } 00:23:17.319 } 00:23:17.319 ]' 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.319 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:17.319 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:17.319 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:17.319 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.319 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.319 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.577 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.959 00:35:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.959 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.218 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.786 00:23:19.786 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:19.786 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:19.787 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.044 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:20.044 { 00:23:20.044 "cntlid": 131, 00:23:20.044 "qid": 0, 00:23:20.044 "state": "enabled", 00:23:20.044 "listen_address": { 00:23:20.045 "trtype": "TCP", 00:23:20.045 "adrfam": "IPv4", 00:23:20.045 "traddr": "10.0.0.2", 00:23:20.045 "trsvcid": "4420" 00:23:20.045 }, 00:23:20.045 "peer_address": { 00:23:20.045 "trtype": "TCP", 00:23:20.045 "adrfam": "IPv4", 00:23:20.045 "traddr": "10.0.0.1", 00:23:20.045 "trsvcid": "45642" 00:23:20.045 }, 00:23:20.045 "auth": { 00:23:20.045 "state": "completed", 00:23:20.045 "digest": "sha512", 00:23:20.045 "dhgroup": "ffdhe6144" 00:23:20.045 } 00:23:20.045 } 00:23:20.045 ]' 00:23:20.045 00:35:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:20.045 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.045 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:20.045 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:20.045 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:20.308 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.308 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.308 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.567 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.946 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.946 00:35:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.515 00:23:22.515 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:22.515 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.515 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:23.111 { 00:23:23.111 "cntlid": 133, 00:23:23.111 "qid": 0, 00:23:23.111 "state": "enabled", 00:23:23.111 "listen_address": { 00:23:23.111 "trtype": "TCP", 00:23:23.111 "adrfam": "IPv4", 00:23:23.111 "traddr": "10.0.0.2", 00:23:23.111 "trsvcid": "4420" 00:23:23.111 }, 00:23:23.111 "peer_address": { 00:23:23.111 "trtype": "TCP", 00:23:23.111 "adrfam": "IPv4", 00:23:23.111 "traddr": "10.0.0.1", 00:23:23.111 "trsvcid": "45660" 00:23:23.111 }, 00:23:23.111 "auth": { 00:23:23.111 "state": "completed", 00:23:23.111 "digest": "sha512", 00:23:23.111 "dhgroup": "ffdhe6144" 00:23:23.111 } 00:23:23.111 } 00:23:23.111 ]' 
00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.111 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.367 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.745 00:35:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:24.745 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.004 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.004 00:35:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.569 00:23:25.569 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:25.569 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:25.569 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.826 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:25.827 { 00:23:25.827 "cntlid": 135, 00:23:25.827 "qid": 0, 00:23:25.827 "state": "enabled", 00:23:25.827 "listen_address": { 00:23:25.827 "trtype": "TCP", 00:23:25.827 "adrfam": "IPv4", 00:23:25.827 "traddr": "10.0.0.2", 00:23:25.827 "trsvcid": "4420" 00:23:25.827 }, 00:23:25.827 "peer_address": { 00:23:25.827 "trtype": "TCP", 00:23:25.827 "adrfam": "IPv4", 00:23:25.827 "traddr": "10.0.0.1", 00:23:25.827 "trsvcid": "45676" 00:23:25.827 }, 00:23:25.827 "auth": { 00:23:25.827 "state": "completed", 00:23:25.827 "digest": "sha512", 00:23:25.827 "dhgroup": "ffdhe6144" 00:23:25.827 } 00:23:25.827 } 00:23:25.827 ]' 00:23:25.827 00:35:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:25.827 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:25.827 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:25.827 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:25.827 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:26.084 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.084 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.084 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.341 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.712 
00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:27.712 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.713 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.645 00:23:28.902 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:28.902 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:28.902 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:29.161 { 00:23:29.161 "cntlid": 137, 00:23:29.161 "qid": 0, 00:23:29.161 "state": "enabled", 00:23:29.161 "listen_address": { 00:23:29.161 "trtype": "TCP", 00:23:29.161 "adrfam": "IPv4", 00:23:29.161 "traddr": "10.0.0.2", 00:23:29.161 "trsvcid": "4420" 00:23:29.161 }, 00:23:29.161 "peer_address": { 00:23:29.161 "trtype": "TCP", 00:23:29.161 "adrfam": "IPv4", 00:23:29.161 "traddr": "10.0.0.1", 00:23:29.161 "trsvcid": "36790" 00:23:29.161 }, 00:23:29.161 "auth": { 00:23:29.161 "state": "completed", 00:23:29.161 "digest": "sha512", 00:23:29.161 "dhgroup": 
"ffdhe8192" 00:23:29.161 } 00:23:29.161 } 00:23:29.161 ]' 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.161 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.419 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.791 00:35:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:30.791 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.049 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.982 00:23:31.982 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:31.982 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:31.982 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:32.240 { 00:23:32.240 "cntlid": 139, 00:23:32.240 "qid": 0, 00:23:32.240 "state": "enabled", 00:23:32.240 "listen_address": { 00:23:32.240 "trtype": "TCP", 00:23:32.240 "adrfam": "IPv4", 00:23:32.240 "traddr": "10.0.0.2", 00:23:32.240 "trsvcid": "4420" 00:23:32.240 }, 00:23:32.240 "peer_address": { 00:23:32.240 "trtype": "TCP", 00:23:32.240 "adrfam": "IPv4", 00:23:32.240 "traddr": "10.0.0.1", 00:23:32.240 "trsvcid": "36814" 00:23:32.240 }, 00:23:32.240 
"auth": { 00:23:32.240 "state": "completed", 00:23:32.240 "digest": "sha512", 00:23:32.240 "dhgroup": "ffdhe8192" 00:23:32.240 } 00:23:32.240 } 00:23:32.240 ]' 00:23:32.240 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.498 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.756 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:ZmEzOTJkOTkwZmE5MTMxN2I2MDVmZGRhNzkyYjcxODA4MVgW: --dhchap-ctrl-secret DHHC-1:02:Y2NhYTJjYWJhMDlhZDQwNzM3ZjZiMjhiYWU3MGRiMjZkZGQzZmM1NTEwMjVjNWQwijtRNA==: 00:23:34.128 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.128 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:34.128 00:36:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.128 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.129 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.501 00:23:35.501 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:35.501 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:35.501 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:35.501 { 00:23:35.501 "cntlid": 141, 00:23:35.501 "qid": 0, 00:23:35.501 "state": "enabled", 00:23:35.501 "listen_address": { 00:23:35.501 "trtype": "TCP", 00:23:35.501 "adrfam": "IPv4", 00:23:35.501 "traddr": "10.0.0.2", 00:23:35.501 "trsvcid": "4420" 00:23:35.501 }, 00:23:35.501 "peer_address": { 00:23:35.501 "trtype": "TCP", 00:23:35.501 "adrfam": "IPv4", 00:23:35.501 "traddr": "10.0.0.1", 00:23:35.501 "trsvcid": 
"36846" 00:23:35.501 }, 00:23:35.501 "auth": { 00:23:35.501 "state": "completed", 00:23:35.501 "digest": "sha512", 00:23:35.501 "dhgroup": "ffdhe8192" 00:23:35.501 } 00:23:35.501 } 00:23:35.501 ]' 00:23:35.501 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.759 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.031 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZjkzM2Y2MGE1MTM1ZTkzNDE2MTZiODIyZTliODk2OThjZDllNjhlNzVjMTU4MzIwNQF96A==: --dhchap-ctrl-secret DHHC-1:01:MGI1OGFjYjZiZDIxZmI0NmM4ZDQ3MWQwMWM3NDJmZmNekHDK: 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:37.404 00:36:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.404 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:37.684 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:38.618 00:23:38.618 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:38.618 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:38.618 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:38.876 { 00:23:38.876 "cntlid": 143, 00:23:38.876 "qid": 0, 00:23:38.876 "state": "enabled", 00:23:38.876 "listen_address": { 00:23:38.876 "trtype": "TCP", 00:23:38.876 "adrfam": "IPv4", 00:23:38.876 "traddr": "10.0.0.2", 00:23:38.876 "trsvcid": "4420" 00:23:38.876 }, 00:23:38.876 "peer_address": { 00:23:38.876 "trtype": "TCP", 00:23:38.876 "adrfam": "IPv4", 00:23:38.876 "traddr": "10.0.0.1", 00:23:38.876 "trsvcid": "57060" 00:23:38.876 }, 00:23:38.876 "auth": { 
00:23:38.876 "state": "completed", 00:23:38.876 "digest": "sha512", 00:23:38.876 "dhgroup": "ffdhe8192" 00:23:38.876 } 00:23:38.876 } 00:23:38.876 ]' 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:38.876 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:39.134 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.134 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.134 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.392 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.767 00:36:08 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:23:40.767 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.768 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.765 00:23:41.765 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:41.765 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.765 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.023 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:42.023 { 00:23:42.023 "cntlid": 145, 00:23:42.023 "qid": 0, 
00:23:42.023 "state": "enabled", 00:23:42.023 "listen_address": { 00:23:42.023 "trtype": "TCP", 00:23:42.023 "adrfam": "IPv4", 00:23:42.023 "traddr": "10.0.0.2", 00:23:42.023 "trsvcid": "4420" 00:23:42.023 }, 00:23:42.024 "peer_address": { 00:23:42.024 "trtype": "TCP", 00:23:42.024 "adrfam": "IPv4", 00:23:42.024 "traddr": "10.0.0.1", 00:23:42.024 "trsvcid": "57082" 00:23:42.024 }, 00:23:42.024 "auth": { 00:23:42.024 "state": "completed", 00:23:42.024 "digest": "sha512", 00:23:42.024 "dhgroup": "ffdhe8192" 00:23:42.024 } 00:23:42.024 } 00:23:42.024 ]' 00:23:42.024 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.282 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.539 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:MDQ4Njc1MzY1OWZjODQ0M2JmNjhiZjIxMDRmYWNhYzllN2FlOWI2NzJhNzU3NzAxMlqSlA==: --dhchap-ctrl-secret DHHC-1:03:Zjk0ZWVmNjIxOThmZTJhOGI4ZDA2ZWY2ODAzZjY2ZWEwNmE5MjZhMjBhZmY0NTU5NzlkYTFiM2UyNWQ5NWVlZd8/ViE=: 00:23:43.914 00:36:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:43.914 00:36:11 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:43.914 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:44.848 request: 00:23:44.848 { 00:23:44.848 "name": "nvme0", 00:23:44.848 "trtype": "tcp", 00:23:44.848 "traddr": "10.0.0.2", 00:23:44.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:44.848 "adrfam": "ipv4", 00:23:44.848 "trsvcid": "4420", 00:23:44.848 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:44.848 "dhchap_key": "key2", 00:23:44.848 "method": "bdev_nvme_attach_controller", 00:23:44.848 "req_id": 1 00:23:44.848 } 00:23:44.848 Got JSON-RPC error response 00:23:44.849 response: 00:23:44.849 { 00:23:44.849 "code": -5, 00:23:44.849 "message": "Input/output error" 00:23:44.849 } 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:44.849 00:36:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.849 00:36:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:44.849 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:45.782 request: 00:23:45.782 { 00:23:45.782 "name": "nvme0", 00:23:45.782 "trtype": "tcp", 00:23:45.782 "traddr": "10.0.0.2", 00:23:45.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:45.782 "adrfam": "ipv4", 00:23:45.782 "trsvcid": "4420", 00:23:45.782 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:45.782 "dhchap_key": "key1", 00:23:45.782 "dhchap_ctrlr_key": "ckey2", 00:23:45.782 "method": "bdev_nvme_attach_controller", 00:23:45.782 "req_id": 1 00:23:45.782 } 00:23:45.782 Got JSON-RPC error response 00:23:45.782 response: 00:23:45.782 { 00:23:45.782 "code": -5, 00:23:45.782 "message": "Input/output error" 00:23:45.782 } 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:45.782 00:36:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # type -t hostrpc 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.782 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.716 request: 00:23:46.716 { 00:23:46.716 "name": "nvme0", 00:23:46.716 "trtype": "tcp", 00:23:46.716 "traddr": "10.0.0.2", 00:23:46.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:46.716 "adrfam": "ipv4", 00:23:46.716 "trsvcid": "4420", 00:23:46.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:46.716 "dhchap_key": "key1", 00:23:46.716 "dhchap_ctrlr_key": "ckey1", 00:23:46.716 "method": "bdev_nvme_attach_controller", 00:23:46.716 "req_id": 1 00:23:46.716 } 00:23:46.716 Got JSON-RPC error response 00:23:46.716 response: 00:23:46.716 { 00:23:46.716 "code": -5, 00:23:46.716 "message": "Input/output error" 00:23:46.716 } 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 962477 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 962477 ']' 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 962477 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 962477 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 962477' 00:23:46.716 killing process with pid 962477 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 962477 00:23:46.716 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 962477 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=984002 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 984002 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 984002 ']' 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:46.975 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # 
waitforlisten 984002 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 984002 ']' 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:47.234 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.493 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:47.493 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:23:47.493 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:23:47.493 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.493 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.751 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.751 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:23:47.751 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:23:47.751 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:47.752 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:48.685 00:23:48.685 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:23:48.685 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:23:48.685 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.957 00:36:16 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:48.957 { 00:23:48.957 "cntlid": 1, 00:23:48.957 "qid": 0, 00:23:48.957 "state": "enabled", 00:23:48.957 "listen_address": { 00:23:48.957 "trtype": "TCP", 00:23:48.957 "adrfam": "IPv4", 00:23:48.957 "traddr": "10.0.0.2", 00:23:48.957 "trsvcid": "4420" 00:23:48.957 }, 00:23:48.957 "peer_address": { 00:23:48.957 "trtype": "TCP", 00:23:48.957 "adrfam": "IPv4", 00:23:48.957 "traddr": "10.0.0.1", 00:23:48.957 "trsvcid": "43766" 00:23:48.957 }, 00:23:48.957 "auth": { 00:23:48.957 "state": "completed", 00:23:48.957 "digest": "sha512", 00:23:48.957 "dhgroup": "ffdhe8192" 00:23:48.957 } 00:23:48.957 } 00:23:48.957 ]' 00:23:48.957 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.215 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.473 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:03:NjU1ZTFiNjM3ZWY1ODZmMTllMjEzYTdiM2ExYzIzZTVkZDRiMDAyNTdmMDQzN2M3NzM1MjE1MDM5OGI4NWYzMBfT75I=: 00:23:50.846 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@648 -- # local es=0 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:50.847 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.104 request: 00:23:51.104 { 00:23:51.104 "name": "nvme0", 00:23:51.104 "trtype": "tcp", 00:23:51.104 "traddr": "10.0.0.2", 00:23:51.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:51.104 "adrfam": "ipv4", 00:23:51.104 "trsvcid": "4420", 00:23:51.104 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:51.104 "dhchap_key": "key3", 00:23:51.104 "method": "bdev_nvme_attach_controller", 00:23:51.104 "req_id": 1 00:23:51.104 } 00:23:51.104 Got JSON-RPC error response 00:23:51.104 response: 00:23:51.104 { 00:23:51.104 "code": -5, 
00:23:51.104 "message": "Input/output error" 00:23:51.104 } 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:51.104 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.363 00:36:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.363 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:51.621 request: 00:23:51.621 { 00:23:51.621 "name": "nvme0", 00:23:51.621 "trtype": "tcp", 00:23:51.621 "traddr": "10.0.0.2", 00:23:51.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:51.621 "adrfam": "ipv4", 00:23:51.621 "trsvcid": "4420", 00:23:51.621 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:51.621 "dhchap_key": "key3", 00:23:51.621 "method": "bdev_nvme_attach_controller", 00:23:51.621 "req_id": 1 00:23:51.621 } 00:23:51.621 Got JSON-RPC error response 00:23:51.621 response: 00:23:51.621 { 00:23:51.621 "code": -5, 00:23:51.621 "message": "Input/output error" 00:23:51.621 } 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.621 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:51.879 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:52.138 request: 00:23:52.138 { 00:23:52.138 "name": "nvme0", 00:23:52.138 "trtype": "tcp", 00:23:52.138 "traddr": "10.0.0.2", 00:23:52.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:23:52.138 "adrfam": "ipv4", 00:23:52.138 "trsvcid": "4420", 00:23:52.138 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:23:52.138 "dhchap_key": "key0", 00:23:52.138 "dhchap_ctrlr_key": "key1", 00:23:52.138 "method": "bdev_nvme_attach_controller", 00:23:52.138 "req_id": 1 00:23:52.138 } 00:23:52.138 Got JSON-RPC error response 00:23:52.138 response: 00:23:52.138 { 00:23:52.138 "code": -5, 00:23:52.138 "message": "Input/output error" 00:23:52.138 } 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:52.138 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:52.704 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:23:52.704 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 962507 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 962507 ']' 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 962507 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 962507 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 962507' 00:23:52.962 killing process with pid 962507 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 962507 00:23:52.962 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 962507 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@120 -- # set +e 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.529 rmmod nvme_tcp 00:23:53.529 rmmod nvme_fabrics 00:23:53.529 rmmod nvme_keyring 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 984002 ']' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 984002 ']' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 984002' 00:23:53.529 killing process with pid 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 984002 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.529 00:36:21 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.529 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.064 00:36:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.065 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DDL /tmp/spdk.key-sha256.eiG /tmp/spdk.key-sha384.3rS /tmp/spdk.key-sha512.ZyU /tmp/spdk.key-sha512.IRI /tmp/spdk.key-sha384.1uw /tmp/spdk.key-sha256.jtd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:56.065 00:23:56.065 real 3m42.139s 00:23:56.065 user 8m36.505s 00:23:56.065 sys 0m25.724s 00:23:56.065 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:56.065 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.065 ************************************ 00:23:56.065 END TEST nvmf_auth_target 00:23:56.065 ************************************ 00:23:56.065 00:36:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:56.065 00:36:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.065 00:36:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 
00:23:56.065 00:36:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:56.065 00:36:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:56.065 ************************************ 00:23:56.065 START TEST nvmf_bdevio_no_huge 00:23:56.065 ************************************ 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.065 * Looking for test storage... 00:23:56.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.065 00:36:23 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.065 00:36:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:57.448 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:57.448 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:57.448 Found net devices under 0000:08:00.0: cvl_0_0 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:57.448 Found net devices under 0000:08:00.1: cvl_0_1 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.448 00:36:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.448 
00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:57.448 00:23:57.448 --- 10.0.0.2 ping statistics --- 00:23:57.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.448 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:57.448 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:23:57.706 00:23:57.706 --- 10.0.0.1 ping statistics --- 00:23:57.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.706 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.706 00:36:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:57.706 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=986134 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 986134 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 986134 ']' 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:57.707 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.707 [2024-07-12 00:36:25.368464] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:57.707 [2024-07-12 00:36:25.368557] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:57.707 [2024-07-12 00:36:25.435804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.707 [2024-07-12 00:36:25.523823] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.707 [2024-07-12 00:36:25.523885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.707 [2024-07-12 00:36:25.523902] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.707 [2024-07-12 00:36:25.523916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.707 [2024-07-12 00:36:25.523928] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.707 [2024-07-12 00:36:25.524039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.707 [2024-07-12 00:36:25.524114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:57.707 [2024-07-12 00:36:25.524210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:57.707 [2024-07-12 00:36:25.524216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 [2024-07-12 00:36:25.648018] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 Malloc0 00:23:57.964 00:36:25 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:57.964 [2024-07-12 00:36:25.686647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:57.964 00:36:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.964 { 00:23:57.964 "params": { 00:23:57.964 "name": "Nvme$subsystem", 00:23:57.964 "trtype": "$TEST_TRANSPORT", 00:23:57.964 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.964 "adrfam": "ipv4", 00:23:57.964 "trsvcid": "$NVMF_PORT", 00:23:57.964 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.964 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.964 "hdgst": ${hdgst:-false}, 00:23:57.964 "ddgst": ${ddgst:-false} 00:23:57.964 }, 00:23:57.964 "method": "bdev_nvme_attach_controller" 00:23:57.964 } 00:23:57.964 EOF 00:23:57.964 )") 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:57.964 00:36:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.964 "params": { 00:23:57.964 "name": "Nvme1", 00:23:57.964 "trtype": "tcp", 00:23:57.964 "traddr": "10.0.0.2", 00:23:57.964 "adrfam": "ipv4", 00:23:57.964 "trsvcid": "4420", 00:23:57.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.964 "hdgst": false, 00:23:57.964 "ddgst": false 00:23:57.964 }, 00:23:57.964 "method": "bdev_nvme_attach_controller" 00:23:57.964 }' 00:23:57.964 [2024-07-12 00:36:25.736600] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:57.964 [2024-07-12 00:36:25.736703] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid986164 ] 00:23:57.964 [2024-07-12 00:36:25.796913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.222 [2024-07-12 00:36:25.888635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.222 [2024-07-12 00:36:25.888720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.222 [2024-07-12 00:36:25.888755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.480 I/O targets: 00:23:58.480 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:58.480 00:23:58.480 00:23:58.480 CUnit - A unit testing framework for C - Version 2.1-3 00:23:58.480 http://cunit.sourceforge.net/ 00:23:58.480 00:23:58.480 00:23:58.480 Suite: bdevio tests on: Nvme1n1 00:23:58.480 Test: blockdev write read block ...passed 00:23:58.480 Test: blockdev write zeroes read block ...passed 00:23:58.480 Test: blockdev write zeroes read no split ...passed 00:23:58.480 Test: blockdev write zeroes read split ...passed 00:23:58.480 Test: blockdev write zeroes read split partial ...passed 00:23:58.480 Test: blockdev reset ...[2024-07-12 00:36:26.254242] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.480 [2024-07-12 00:36:26.254359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf95ca0 (9): Bad file descriptor 00:23:58.480 [2024-07-12 00:36:26.270404] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:58.480 passed 00:23:58.480 Test: blockdev write read 8 blocks ...passed 00:23:58.480 Test: blockdev write read size > 128k ...passed 00:23:58.480 Test: blockdev write read invalid size ...passed 00:23:58.739 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:58.739 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:58.739 Test: blockdev write read max offset ...passed 00:23:58.739 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:58.739 Test: blockdev writev readv 8 blocks ...passed 00:23:58.739 Test: blockdev writev readv 30 x 1block ...passed 00:23:58.739 Test: blockdev writev readv block ...passed 00:23:58.739 Test: blockdev writev readv size > 128k ...passed 00:23:59.031 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:59.031 Test: blockdev comparev and writev ...[2024-07-12 00:36:26.608892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.608936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.608963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.608982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.609362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.609390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.609414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.609430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.609773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.609799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.609823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.609840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.610210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.610236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.610259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:59.031 [2024-07-12 00:36:26.610276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:59.031 passed 00:23:59.031 Test: blockdev nvme passthru rw ...passed 00:23:59.031 Test: blockdev nvme passthru vendor specific ...[2024-07-12 00:36:26.692916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.031 [2024-07-12 00:36:26.692947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.693131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.031 [2024-07-12 00:36:26.693156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.693339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.031 [2024-07-12 00:36:26.693363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:59.031 [2024-07-12 00:36:26.693545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:59.031 [2024-07-12 00:36:26.693568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:59.031 passed 00:23:59.031 Test: blockdev nvme admin passthru ...passed 00:23:59.031 Test: blockdev copy ...passed 00:23:59.031 00:23:59.031 Run Summary: Type Total Ran Passed Failed Inactive 00:23:59.031 suites 1 1 n/a 0 0 00:23:59.031 tests 23 23 23 0 0 00:23:59.031 asserts 152 152 152 0 n/a 00:23:59.031 00:23:59.031 Elapsed time = 1.305 seconds 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.289 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.289 rmmod nvme_tcp 00:23:59.289 rmmod nvme_fabrics 00:23:59.289 rmmod nvme_keyring 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 986134 ']' 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 986134 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 986134 ']' 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 986134 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 986134 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 986134' 00:23:59.547 killing process with pid 986134 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 986134 00:23:59.547 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 986134 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.807 00:36:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.343 00:36:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.343 00:24:02.343 real 0m6.133s 00:24:02.343 user 0m10.493s 00:24:02.343 sys 0m2.307s 00:24:02.343 00:36:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:02.343 00:36:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:02.343 ************************************ 00:24:02.343 END TEST nvmf_bdevio_no_huge 00:24:02.343 ************************************ 00:24:02.343 00:36:29 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:02.343 00:36:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:02.343 00:36:29 nvmf_tcp -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:24:02.343 00:36:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.343 ************************************ 00:24:02.343 START TEST nvmf_tls 00:24:02.343 ************************************ 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:02.343 * Looking for test storage... 00:24:02.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.343 
00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.343 00:36:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:03.721 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:03.721 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.721 00:36:31 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:03.721 Found net devices under 0000:08:00.0: cvl_0_0 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:03.721 Found net devices under 0000:08:00.1: cvl_0_1 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:03.721 00:36:31 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:03.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:24:03.721 00:24:03.721 --- 10.0.0.2 ping statistics --- 00:24:03.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.721 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:24:03.721 00:24:03.721 --- 10.0.0.1 ping statistics --- 00:24:03.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.721 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # 
modprobe nvme-tcp 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=987767 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 987767 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 987767 ']' 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:03.721 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.721 [2024-07-12 00:36:31.542779] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:03.722 [2024-07-12 00:36:31.542883] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.980 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.980 [2024-07-12 00:36:31.609656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.980 [2024-07-12 00:36:31.696479] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.980 [2024-07-12 00:36:31.696541] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:03.980 [2024-07-12 00:36:31.696557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.980 [2024-07-12 00:36:31.696571] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.980 [2024-07-12 00:36:31.696583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:03.980 [2024-07-12 00:36:31.696638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:03.980 00:36:31 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:04.545 true 00:24:04.545 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:04.545 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:24:04.804 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:24:04.804 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:04.804 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:05.062 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.062 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:24:05.321 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:24:05.321 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:05.321 00:36:32 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:05.580 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.580 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:24:05.839 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:24:05.839 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:05.839 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:05.839 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:06.097 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:24:06.097 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:06.097 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:06.356 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:06.356 00:36:33 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:06.356 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:24:06.356 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:06.356 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:06.615 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:06.615 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:24:06.874 00:36:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.8Qof0yZk1H 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:07.132 
00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.dHoet5BWBs 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.8Qof0yZk1H 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dHoet5BWBs 00:24:07.132 00:36:34 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:07.391 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:07.649 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.8Qof0yZk1H 00:24:07.649 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8Qof0yZk1H 00:24:07.649 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:07.907 [2024-07-12 00:36:35.601866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.907 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.166 00:36:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:08.424 [2024-07-12 00:36:36.167451] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.424 [2024-07-12 00:36:36.167672] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:08.424 00:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:08.683 malloc0 00:24:08.683 00:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:08.941 00:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Qof0yZk1H 00:24:09.200 [2024-07-12 00:36:36.943945] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:09.200 00:36:36 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.8Qof0yZk1H 00:24:09.200 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.400 Initializing NVMe Controllers 00:24:21.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:21.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:21.400 Initialization complete. Launching workers. 
00:24:21.400 ======================================================== 00:24:21.400 Latency(us) 00:24:21.400 Device Information : IOPS MiB/s Average min max 00:24:21.400 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7328.52 28.63 8735.92 1367.84 11303.50 00:24:21.400 ======================================================== 00:24:21.400 Total : 7328.52 28.63 8735.92 1367.84 11303.50 00:24:21.400 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Qof0yZk1H 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Qof0yZk1H' 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=989219 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 989219 /var/tmp/bdevperf.sock 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 989219 ']' 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.400 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.400 [2024-07-12 00:36:47.118231] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:21.401 [2024-07-12 00:36:47.118320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989219 ] 00:24:21.401 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.401 [2024-07-12 00:36:47.178836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.401 [2024-07-12 00:36:47.266239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.401 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:21.401 00:36:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:21.401 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Qof0yZk1H 00:24:21.401 [2024-07-12 00:36:47.624841] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.401 [2024-07-12 00:36:47.624947] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:21.401 TLSTESTn1 00:24:21.401 00:36:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:21.401 Running I/O for 10 seconds... 00:24:31.397 00:24:31.397 Latency(us) 00:24:31.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.397 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:31.397 Verification LBA range: start 0x0 length 0x2000 00:24:31.397 TLSTESTn1 : 10.02 3223.90 12.59 0.00 0.00 39632.34 8204.14 36117.62 00:24:31.397 =================================================================================================================== 00:24:31.397 Total : 3223.90 12.59 0.00 0.00 39632.34 8204.14 36117.62 00:24:31.397 0 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 989219 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 989219 ']' 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 989219 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 989219 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 989219' 00:24:31.397 killing process with pid 989219 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 989219 00:24:31.397 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.397 00:24:31.397 Latency(us) 
00:24:31.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.397 =================================================================================================================== 00:24:31.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.397 [2024-07-12 00:36:57.901931] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.397 00:36:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 989219 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dHoet5BWBs 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dHoet5BWBs 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dHoet5BWBs 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.dHoet5BWBs' 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990210 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990210 /var/tmp/bdevperf.sock 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990210 ']' 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.397 [2024-07-12 00:36:58.116420] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:31.397 [2024-07-12 00:36:58.116519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990210 ] 00:24:31.397 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.397 [2024-07-12 00:36:58.178218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.397 [2024-07-12 00:36:58.268920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dHoet5BWBs 00:24:31.397 [2024-07-12 00:36:58.645258] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.397 [2024-07-12 00:36:58.645382] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:31.397 [2024-07-12 00:36:58.655447] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:31.397 [2024-07-12 00:36:58.655501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c86c0 (107): Transport endpoint is not connected 00:24:31.397 [2024-07-12 00:36:58.656493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c86c0 (9): Bad file descriptor 00:24:31.397 [2024-07-12 00:36:58.657493] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.397 [2024-07-12 00:36:58.657516] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:31.397 [2024-07-12 00:36:58.657536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:31.397 request: 00:24:31.397 { 00:24:31.397 "name": "TLSTEST", 00:24:31.397 "trtype": "tcp", 00:24:31.397 "traddr": "10.0.0.2", 00:24:31.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.397 "adrfam": "ipv4", 00:24:31.397 "trsvcid": "4420", 00:24:31.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.397 "psk": "/tmp/tmp.dHoet5BWBs", 00:24:31.397 "method": "bdev_nvme_attach_controller", 00:24:31.397 "req_id": 1 00:24:31.397 } 00:24:31.397 Got JSON-RPC error response 00:24:31.397 response: 00:24:31.397 { 00:24:31.397 "code": -5, 00:24:31.397 "message": "Input/output error" 00:24:31.397 } 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 990210 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990210 ']' 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990210 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.397 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990210 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990210' 00:24:31.398 killing process with pid 990210 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990210 
00:24:31.398 Received shutdown signal, test time was about 10.000000 seconds 00:24:31.398 00:24:31.398 Latency(us) 00:24:31.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.398 =================================================================================================================== 00:24:31.398 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:31.398 [2024-07-12 00:36:58.702352] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990210 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Qof0yZk1H 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Qof0yZk1H 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.8Qof0yZk1H 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Qof0yZk1H' 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990310 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990310 /var/tmp/bdevperf.sock 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990310 ']' 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.398 00:36:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.398 [2024-07-12 00:36:58.898526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:31.398 [2024-07-12 00:36:58.898632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990310 ] 00:24:31.398 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.398 [2024-07-12 00:36:58.957931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.398 [2024-07-12 00:36:59.045340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.398 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.398 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:31.398 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.8Qof0yZk1H 00:24:31.656 [2024-07-12 00:36:59.428090] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.656 [2024-07-12 00:36:59.428214] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:31.656 [2024-07-12 00:36:59.437386] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:31.656 [2024-07-12 00:36:59.437423] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 
nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:31.656 [2024-07-12 00:36:59.437465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:31.656 [2024-07-12 00:36:59.438241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138b6c0 (107): Transport endpoint is not connected 00:24:31.656 [2024-07-12 00:36:59.439238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138b6c0 (9): Bad file descriptor 00:24:31.656 [2024-07-12 00:36:59.440234] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:31.656 [2024-07-12 00:36:59.440257] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:31.656 [2024-07-12 00:36:59.440278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
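The "Could not find PSK for identity: NVMe0R01 ..." errors above show the TLS PSK identity the NVMe/TCP layer presents during the handshake: a fixed `NVMe0R<hash>` tag followed by the host NQN and subsystem NQN. A sketch of composing that string, assuming (as the log suggests) "0" is the version digit, "R" marks a retained PSK, and "01" identifies the hash; `build_psk_identity` is an illustrative helper, not an SPDK function:

```python
def build_psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
    # Identity layout seen in the log: "NVMe0R01 <hostnqn> <subnqn>".
    # The target looks this identity up to select the pre-shared key;
    # a miss produces the posix.c "Unable to find PSK" error above.
    return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

identity = build_psk_identity("nqn.2016-06.io.spdk:host2",
                              "nqn.2016-06.io.spdk:cnode1")
print(identity)
```

Because this test attaches with `nqn.2016-06.io.spdk:host2` while only `host1` was registered via `nvmf_subsystem_add_host`, the lookup fails by design and the connection collapses into the "Transport endpoint is not connected" errors that follow.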
00:24:31.656 request:
00:24:31.656 {
00:24:31.656 "name": "TLSTEST",
00:24:31.656 "trtype": "tcp",
00:24:31.656 "traddr": "10.0.0.2",
00:24:31.656 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:24:31.656 "adrfam": "ipv4",
00:24:31.656 "trsvcid": "4420",
00:24:31.656 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:31.656 "psk": "/tmp/tmp.8Qof0yZk1H",
00:24:31.656 "method": "bdev_nvme_attach_controller",
00:24:31.656 "req_id": 1
00:24:31.656 }
00:24:31.656 Got JSON-RPC error response
00:24:31.656 response:
00:24:31.656 {
00:24:31.656 "code": -5,
00:24:31.656 "message": "Input/output error"
00:24:31.656 }
00:24:31.656 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 990310
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990310 ']'
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990310
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990310
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990310'
killing process with pid 990310
00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990310
Received shutdown signal, test time was about 10.000000 seconds
00:24:31.656
00:24:31.656 Latency(us)
00:24:31.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:31.656 ===================================================================================================================
00:24:31.656 Total : 0.00 0.00 0.00 0.00
0.00 18446744073709551616.00 0.00 00:24:31.656 [2024-07-12 00:36:59.488969] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:31.656 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990310 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Qof0yZk1H 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Qof0yZk1H 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.8Qof0yZk1H 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.914 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8Qof0yZk1H' 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990412 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990412 /var/tmp/bdevperf.sock 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990412 ']' 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:31.915 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.915 [2024-07-12 00:36:59.696629] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:31.915 [2024-07-12 00:36:59.696727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990412 ] 00:24:31.915 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.185 [2024-07-12 00:36:59.757573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.185 [2024-07-12 00:36:59.848476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.185 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:32.185 00:36:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:32.185 00:36:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8Qof0yZk1H 00:24:32.446 [2024-07-12 00:37:00.220726] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.446 [2024-07-12 00:37:00.220852] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:32.446 [2024-07-12 00:37:00.229409] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:32.446 [2024-07-12 00:37:00.229445] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:32.446 [2024-07-12 00:37:00.229487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:32.446 
[2024-07-12 00:37:00.229978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fa6c0 (107): Transport endpoint is not connected 00:24:32.446 [2024-07-12 00:37:00.230970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fa6c0 (9): Bad file descriptor 00:24:32.446 [2024-07-12 00:37:00.231971] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:32.446 [2024-07-12 00:37:00.231992] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:32.446 [2024-07-12 00:37:00.232012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:32.446 request: 00:24:32.446 { 00:24:32.446 "name": "TLSTEST", 00:24:32.446 "trtype": "tcp", 00:24:32.446 "traddr": "10.0.0.2", 00:24:32.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:32.446 "adrfam": "ipv4", 00:24:32.446 "trsvcid": "4420", 00:24:32.446 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:32.446 "psk": "/tmp/tmp.8Qof0yZk1H", 00:24:32.446 "method": "bdev_nvme_attach_controller", 00:24:32.446 "req_id": 1 00:24:32.446 } 00:24:32.447 Got JSON-RPC error response 00:24:32.447 response: 00:24:32.447 { 00:24:32.447 "code": -5, 00:24:32.447 "message": "Input/output error" 00:24:32.447 } 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 990412 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990412 ']' 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990412 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990412 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
process_name=reactor_2 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990412' 00:24:32.447 killing process with pid 990412 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990412 00:24:32.447 Received shutdown signal, test time was about 10.000000 seconds 00:24:32.447 00:24:32.447 Latency(us) 00:24:32.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.447 =================================================================================================================== 00:24:32.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:32.447 [2024-07-12 00:37:00.270554] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:32.447 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990412 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:32.705 
00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990467 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990467 /var/tmp/bdevperf.sock 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990467 ']' 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:32.705 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:32.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.706 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:32.706 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.706 [2024-07-12 00:37:00.466965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:32.706 [2024-07-12 00:37:00.467071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990467 ] 00:24:32.706 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.706 [2024-07-12 00:37:00.527349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.963 [2024-07-12 00:37:00.616161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.963 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:32.964 00:37:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:32.964 00:37:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:33.222 [2024-07-12 00:37:01.001475] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:33.222 [2024-07-12 00:37:01.003686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5aa0 (9): Bad file descriptor 00:24:33.222 [2024-07-12 00:37:01.004681] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:33.222 [2024-07-12 00:37:01.004705] 
nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:33.222 [2024-07-12 00:37:01.004726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:33.222 request: 00:24:33.222 { 00:24:33.222 "name": "TLSTEST", 00:24:33.222 "trtype": "tcp", 00:24:33.222 "traddr": "10.0.0.2", 00:24:33.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.222 "adrfam": "ipv4", 00:24:33.222 "trsvcid": "4420", 00:24:33.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.222 "method": "bdev_nvme_attach_controller", 00:24:33.222 "req_id": 1 00:24:33.222 } 00:24:33.222 Got JSON-RPC error response 00:24:33.222 response: 00:24:33.222 { 00:24:33.222 "code": -5, 00:24:33.222 "message": "Input/output error" 00:24:33.222 } 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 990467 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990467 ']' 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990467 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990467 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990467' 00:24:33.222 killing process with pid 990467 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990467 00:24:33.222 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.222 00:24:33.222 Latency(us) 00:24:33.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:33.222 =================================================================================================================== 00:24:33.222 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.222 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990467 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 987767 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 987767 ']' 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 987767 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 987767 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 987767' 00:24:33.481 killing process with pid 987767 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 987767 00:24:33.481 [2024-07-12 00:37:01.232551] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:33.481 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 987767 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.o0WDExQloy 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.o0WDExQloy 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=990550 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:33.739 00:37:01 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 990550 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 990550 ']' 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.739 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.740 [2024-07-12 00:37:01.519553] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:33.740 [2024-07-12 00:37:01.519657] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.740 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.997 [2024-07-12 00:37:01.584721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.997 [2024-07-12 00:37:01.674064] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.997 [2024-07-12 00:37:01.674119] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.997 [2024-07-12 00:37:01.674142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.997 [2024-07-12 00:37:01.674156] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:33.997 [2024-07-12 00:37:01.674168] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.997 [2024-07-12 00:37:01.674198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o0WDExQloy 00:24:33.997 00:37:01 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:34.255 [2024-07-12 00:37:02.078868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.513 00:37:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:34.770 00:37:02 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.027 [2024-07-12 00:37:02.668443] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.027 [2024-07-12 00:37:02.668671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.027 
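The `format_interchange_psk` step earlier in this run (target/tls.sh@159, implemented by nvmf/common.sh's `format_key` python heredoc visible in the xtrace) wraps the raw key in the TLS PSK interchange format: base64 of the key bytes plus a little-endian CRC32, between a `NVMeTLSkey-1` prefix, a two-hex-digit hash id, and a trailing colon. A self-contained sketch, assuming zlib's CRC32 is the checksum the helper uses; it reproduces the `key_long` value captured above:

```python
import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # Append the little-endian CRC32 of the key, base64 the result, and
    # frame it as "NVMeTLSkey-1:<hash>:<b64>:", as nvmf/common.sh does.
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode("utf-8")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

key_long = format_interchange_psk(
    b"00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```

The resulting string is what the test writes to `/tmp/tmp.o0WDExQloy` (mode 0600) and later passes as `--psk` to `bdev_nvme_attach_controller`.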
00:37:02 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.285 malloc0 00:24:35.285 00:37:02 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:35.543 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:35.800 [2024-07-12 00:37:03.561362] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:35.800 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o0WDExQloy 00:24:35.800 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:35.800 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o0WDExQloy' 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=990773 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 990773 /var/tmp/bdevperf.sock 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' 
-z 990773 ']' 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:35.801 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.801 [2024-07-12 00:37:03.622706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:35.801 [2024-07-12 00:37:03.622793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990773 ] 00:24:36.058 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.058 [2024-07-12 00:37:03.682332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.058 [2024-07-12 00:37:03.770837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.058 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:36.058 00:37:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:36.058 00:37:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:36.315 [2024-07-12 00:37:04.086840] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:24:36.315 [2024-07-12 00:37:04.086965] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:36.573 TLSTESTn1 00:24:36.573 00:37:04 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:36.573 Running I/O for 10 seconds... 00:24:46.542 00:24:46.542 Latency(us) 00:24:46.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.542 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.542 Verification LBA range: start 0x0 length 0x2000 00:24:46.542 TLSTESTn1 : 10.02 3214.17 12.56 0.00 0.00 39749.75 7961.41 40389.59 00:24:46.542 =================================================================================================================== 00:24:46.542 Total : 3214.17 12.56 0.00 0.00 39749.75 7961.41 40389.59 00:24:46.542 0 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 990773 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990773 ']' 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990773 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990773 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing 
process with pid 990773' 00:24:46.542 killing process with pid 990773 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990773 00:24:46.542 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.542 00:24:46.542 Latency(us) 00:24:46.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.542 =================================================================================================================== 00:24:46.542 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.542 [2024-07-12 00:37:14.358136] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:46.542 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990773 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.o0WDExQloy 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o0WDExQloy 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o0WDExQloy 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.o0WDExQloy 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.o0WDExQloy' 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=991766 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 991766 /var/tmp/bdevperf.sock 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 991766 ']' 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.800 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:46.801 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.801 [2024-07-12 00:37:14.575565] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:46.801 [2024-07-12 00:37:14.575670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991766 ] 00:24:46.801 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.801 [2024-07-12 00:37:14.637454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.059 [2024-07-12 00:37:14.725224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.059 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:47.059 00:37:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:47.059 00:37:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:47.318 [2024-07-12 00:37:15.099869] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.318 [2024-07-12 00:37:15.099953] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:47.318 [2024-07-12 00:37:15.099970] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.o0WDExQloy 00:24:47.318 request: 00:24:47.318 { 00:24:47.318 "name": "TLSTEST", 00:24:47.318 "trtype": "tcp", 00:24:47.318 "traddr": "10.0.0.2", 00:24:47.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.318 "adrfam": "ipv4", 00:24:47.318 "trsvcid": "4420", 00:24:47.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.318 "psk": "/tmp/tmp.o0WDExQloy", 00:24:47.318 "method": "bdev_nvme_attach_controller", 00:24:47.318 "req_id": 1 00:24:47.318 } 00:24:47.318 Got JSON-RPC error response 00:24:47.318 response: 00:24:47.318 { 00:24:47.318 "code": -1, 00:24:47.318 
"message": "Operation not permitted" 00:24:47.318 } 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 991766 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 991766 ']' 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 991766 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 991766 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 991766' 00:24:47.318 killing process with pid 991766 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 991766 00:24:47.318 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.318 00:24:47.318 Latency(us) 00:24:47.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.318 =================================================================================================================== 00:24:47.318 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.318 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 991766 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 
)) 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 990550 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 990550 ']' 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 990550 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 990550 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 990550' 00:24:47.577 killing process with pid 990550 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 990550 00:24:47.577 [2024-07-12 00:37:15.323255] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:47.577 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 990550 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=991879 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
991879 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 991879 ']' 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:47.836 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.836 [2024-07-12 00:37:15.555840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:47.836 [2024-07-12 00:37:15.555942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.836 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.836 [2024-07-12 00:37:15.621004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.094 [2024-07-12 00:37:15.710251] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.094 [2024-07-12 00:37:15.710314] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.094 [2024-07-12 00:37:15.710330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.094 [2024-07-12 00:37:15.710344] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.094 [2024-07-12 00:37:15.710357] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:48.094 [2024-07-12 00:37:15.710388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o0WDExQloy 00:24:48.094 00:37:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:48.352 [2024-07-12 00:37:16.118959] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.352 00:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:48.610 00:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:49.176 [2024-07-12 00:37:16.712581] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:49.176 [2024-07-12 00:37:16.712828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.176 00:37:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:49.435 malloc0 00:24:49.435 00:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:49.693 00:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:49.951 [2024-07-12 00:37:17.621546] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:49.951 [2024-07-12 00:37:17.621599] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:49.951 [2024-07-12 00:37:17.621642] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:49.951 request: 00:24:49.951 { 00:24:49.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.951 "host": "nqn.2016-06.io.spdk:host1", 00:24:49.951 "psk": "/tmp/tmp.o0WDExQloy", 00:24:49.951 "method": "nvmf_subsystem_add_host", 00:24:49.951 "req_id": 1 00:24:49.951 } 00:24:49.951 Got JSON-RPC error response 00:24:49.951 response: 00:24:49.951 { 00:24:49.951 "code": -32603, 00:24:49.951 
"message": "Internal error" 00:24:49.951 } 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 991879 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 991879 ']' 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 991879 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 991879 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 991879' 00:24:49.951 killing process with pid 991879 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 991879 00:24:49.951 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 991879 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.o0WDExQloy 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.210 00:37:17 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=992121 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 992121 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 992121 ']' 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:50.210 00:37:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.210 [2024-07-12 00:37:17.904168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:50.210 [2024-07-12 00:37:17.904259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.210 [2024-07-12 00:37:17.968111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.468 [2024-07-12 00:37:18.053970] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.468 [2024-07-12 00:37:18.054023] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:50.468 [2024-07-12 00:37:18.054039] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.468 [2024-07-12 00:37:18.054052] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.468 [2024-07-12 00:37:18.054064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.468 [2024-07-12 00:37:18.054100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o0WDExQloy 00:24:50.468 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:50.726 [2024-07-12 00:37:18.392998] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.726 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:50.984 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:24:51.243 [2024-07-12 00:37:18.930432] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.243 [2024-07-12 00:37:18.930648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.243 00:37:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:51.501 malloc0 00:24:51.501 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:51.760 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:52.018 [2024-07-12 00:37:19.763090] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=992339 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 992339 /var/tmp/bdevperf.sock 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 992339 ']' 00:24:52.018 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.019 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:52.019 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:52.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.019 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:52.019 00:37:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.019 [2024-07-12 00:37:19.829914] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:52.019 [2024-07-12 00:37:19.830016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992339 ] 00:24:52.336 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.336 [2024-07-12 00:37:19.892099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.336 [2024-07-12 00:37:19.979751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.336 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:52.336 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:52.336 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:24:52.597 [2024-07-12 00:37:20.372145] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.597 [2024-07-12 00:37:20.372270] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:52.855 TLSTESTn1 00:24:52.855 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:53.113 00:37:20 
nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:53.114 "subsystems": [ 00:24:53.114 { 00:24:53.114 "subsystem": "keyring", 00:24:53.114 "config": [] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "iobuf", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "iobuf_set_options", 00:24:53.114 "params": { 00:24:53.114 "small_pool_count": 8192, 00:24:53.114 "large_pool_count": 1024, 00:24:53.114 "small_bufsize": 8192, 00:24:53.114 "large_bufsize": 135168 00:24:53.114 } 00:24:53.114 } 00:24:53.114 ] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "sock", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "sock_set_default_impl", 00:24:53.114 "params": { 00:24:53.114 "impl_name": "posix" 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "sock_impl_set_options", 00:24:53.114 "params": { 00:24:53.114 "impl_name": "ssl", 00:24:53.114 "recv_buf_size": 4096, 00:24:53.114 "send_buf_size": 4096, 00:24:53.114 "enable_recv_pipe": true, 00:24:53.114 "enable_quickack": false, 00:24:53.114 "enable_placement_id": 0, 00:24:53.114 "enable_zerocopy_send_server": true, 00:24:53.114 "enable_zerocopy_send_client": false, 00:24:53.114 "zerocopy_threshold": 0, 00:24:53.114 "tls_version": 0, 00:24:53.114 "enable_ktls": false 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "sock_impl_set_options", 00:24:53.114 "params": { 00:24:53.114 "impl_name": "posix", 00:24:53.114 "recv_buf_size": 2097152, 00:24:53.114 "send_buf_size": 2097152, 00:24:53.114 "enable_recv_pipe": true, 00:24:53.114 "enable_quickack": false, 00:24:53.114 "enable_placement_id": 0, 00:24:53.114 "enable_zerocopy_send_server": true, 00:24:53.114 "enable_zerocopy_send_client": false, 00:24:53.114 "zerocopy_threshold": 0, 00:24:53.114 "tls_version": 0, 00:24:53.114 "enable_ktls": false 00:24:53.114 } 00:24:53.114 } 00:24:53.114 ] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "vmd", 00:24:53.114 "config": [] 00:24:53.114 }, 
00:24:53.114 { 00:24:53.114 "subsystem": "accel", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "accel_set_options", 00:24:53.114 "params": { 00:24:53.114 "small_cache_size": 128, 00:24:53.114 "large_cache_size": 16, 00:24:53.114 "task_count": 2048, 00:24:53.114 "sequence_count": 2048, 00:24:53.114 "buf_count": 2048 00:24:53.114 } 00:24:53.114 } 00:24:53.114 ] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "bdev", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "bdev_set_options", 00:24:53.114 "params": { 00:24:53.114 "bdev_io_pool_size": 65535, 00:24:53.114 "bdev_io_cache_size": 256, 00:24:53.114 "bdev_auto_examine": true, 00:24:53.114 "iobuf_small_cache_size": 128, 00:24:53.114 "iobuf_large_cache_size": 16 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_raid_set_options", 00:24:53.114 "params": { 00:24:53.114 "process_window_size_kb": 1024 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_iscsi_set_options", 00:24:53.114 "params": { 00:24:53.114 "timeout_sec": 30 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_nvme_set_options", 00:24:53.114 "params": { 00:24:53.114 "action_on_timeout": "none", 00:24:53.114 "timeout_us": 0, 00:24:53.114 "timeout_admin_us": 0, 00:24:53.114 "keep_alive_timeout_ms": 10000, 00:24:53.114 "arbitration_burst": 0, 00:24:53.114 "low_priority_weight": 0, 00:24:53.114 "medium_priority_weight": 0, 00:24:53.114 "high_priority_weight": 0, 00:24:53.114 "nvme_adminq_poll_period_us": 10000, 00:24:53.114 "nvme_ioq_poll_period_us": 0, 00:24:53.114 "io_queue_requests": 0, 00:24:53.114 "delay_cmd_submit": true, 00:24:53.114 "transport_retry_count": 4, 00:24:53.114 "bdev_retry_count": 3, 00:24:53.114 "transport_ack_timeout": 0, 00:24:53.114 "ctrlr_loss_timeout_sec": 0, 00:24:53.114 "reconnect_delay_sec": 0, 00:24:53.114 "fast_io_fail_timeout_sec": 0, 00:24:53.114 "disable_auto_failback": false, 00:24:53.114 "generate_uuids": false, 
00:24:53.114 "transport_tos": 0, 00:24:53.114 "nvme_error_stat": false, 00:24:53.114 "rdma_srq_size": 0, 00:24:53.114 "io_path_stat": false, 00:24:53.114 "allow_accel_sequence": false, 00:24:53.114 "rdma_max_cq_size": 0, 00:24:53.114 "rdma_cm_event_timeout_ms": 0, 00:24:53.114 "dhchap_digests": [ 00:24:53.114 "sha256", 00:24:53.114 "sha384", 00:24:53.114 "sha512" 00:24:53.114 ], 00:24:53.114 "dhchap_dhgroups": [ 00:24:53.114 "null", 00:24:53.114 "ffdhe2048", 00:24:53.114 "ffdhe3072", 00:24:53.114 "ffdhe4096", 00:24:53.114 "ffdhe6144", 00:24:53.114 "ffdhe8192" 00:24:53.114 ] 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_nvme_set_hotplug", 00:24:53.114 "params": { 00:24:53.114 "period_us": 100000, 00:24:53.114 "enable": false 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_malloc_create", 00:24:53.114 "params": { 00:24:53.114 "name": "malloc0", 00:24:53.114 "num_blocks": 8192, 00:24:53.114 "block_size": 4096, 00:24:53.114 "physical_block_size": 4096, 00:24:53.114 "uuid": "b06b8f72-f949-4d86-ba3a-bc60899b0b09", 00:24:53.114 "optimal_io_boundary": 0 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "bdev_wait_for_examine" 00:24:53.114 } 00:24:53.114 ] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "nbd", 00:24:53.114 "config": [] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "scheduler", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "framework_set_scheduler", 00:24:53.114 "params": { 00:24:53.114 "name": "static" 00:24:53.114 } 00:24:53.114 } 00:24:53.114 ] 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "subsystem": "nvmf", 00:24:53.114 "config": [ 00:24:53.114 { 00:24:53.114 "method": "nvmf_set_config", 00:24:53.114 "params": { 00:24:53.114 "discovery_filter": "match_any", 00:24:53.114 "admin_cmd_passthru": { 00:24:53.114 "identify_ctrlr": false 00:24:53.114 } 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "nvmf_set_max_subsystems", 
00:24:53.114 "params": { 00:24:53.114 "max_subsystems": 1024 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "nvmf_set_crdt", 00:24:53.114 "params": { 00:24:53.114 "crdt1": 0, 00:24:53.114 "crdt2": 0, 00:24:53.114 "crdt3": 0 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "nvmf_create_transport", 00:24:53.114 "params": { 00:24:53.114 "trtype": "TCP", 00:24:53.114 "max_queue_depth": 128, 00:24:53.114 "max_io_qpairs_per_ctrlr": 127, 00:24:53.114 "in_capsule_data_size": 4096, 00:24:53.114 "max_io_size": 131072, 00:24:53.114 "io_unit_size": 131072, 00:24:53.114 "max_aq_depth": 128, 00:24:53.114 "num_shared_buffers": 511, 00:24:53.114 "buf_cache_size": 4294967295, 00:24:53.114 "dif_insert_or_strip": false, 00:24:53.114 "zcopy": false, 00:24:53.114 "c2h_success": false, 00:24:53.114 "sock_priority": 0, 00:24:53.114 "abort_timeout_sec": 1, 00:24:53.114 "ack_timeout": 0, 00:24:53.114 "data_wr_pool_size": 0 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "nvmf_create_subsystem", 00:24:53.114 "params": { 00:24:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.114 "allow_any_host": false, 00:24:53.114 "serial_number": "SPDK00000000000001", 00:24:53.114 "model_number": "SPDK bdev Controller", 00:24:53.114 "max_namespaces": 10, 00:24:53.114 "min_cntlid": 1, 00:24:53.114 "max_cntlid": 65519, 00:24:53.114 "ana_reporting": false 00:24:53.114 } 00:24:53.114 }, 00:24:53.114 { 00:24:53.114 "method": "nvmf_subsystem_add_host", 00:24:53.114 "params": { 00:24:53.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.115 "host": "nqn.2016-06.io.spdk:host1", 00:24:53.115 "psk": "/tmp/tmp.o0WDExQloy" 00:24:53.115 } 00:24:53.115 }, 00:24:53.115 { 00:24:53.115 "method": "nvmf_subsystem_add_ns", 00:24:53.115 "params": { 00:24:53.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.115 "namespace": { 00:24:53.115 "nsid": 1, 00:24:53.115 "bdev_name": "malloc0", 00:24:53.115 "nguid": "B06B8F72F9494D86BA3ABC60899B0B09", 00:24:53.115 
"uuid": "b06b8f72-f949-4d86-ba3a-bc60899b0b09", 00:24:53.115 "no_auto_visible": false 00:24:53.115 } 00:24:53.115 } 00:24:53.115 }, 00:24:53.115 { 00:24:53.115 "method": "nvmf_subsystem_add_listener", 00:24:53.115 "params": { 00:24:53.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.115 "listen_address": { 00:24:53.115 "trtype": "TCP", 00:24:53.115 "adrfam": "IPv4", 00:24:53.115 "traddr": "10.0.0.2", 00:24:53.115 "trsvcid": "4420" 00:24:53.115 }, 00:24:53.115 "secure_channel": true 00:24:53.115 } 00:24:53.115 } 00:24:53.115 ] 00:24:53.115 } 00:24:53.115 ] 00:24:53.115 }' 00:24:53.115 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:53.373 00:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:53.373 "subsystems": [ 00:24:53.373 { 00:24:53.373 "subsystem": "keyring", 00:24:53.373 "config": [] 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "subsystem": "iobuf", 00:24:53.373 "config": [ 00:24:53.373 { 00:24:53.373 "method": "iobuf_set_options", 00:24:53.373 "params": { 00:24:53.373 "small_pool_count": 8192, 00:24:53.373 "large_pool_count": 1024, 00:24:53.373 "small_bufsize": 8192, 00:24:53.373 "large_bufsize": 135168 00:24:53.373 } 00:24:53.373 } 00:24:53.373 ] 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "subsystem": "sock", 00:24:53.373 "config": [ 00:24:53.373 { 00:24:53.373 "method": "sock_set_default_impl", 00:24:53.373 "params": { 00:24:53.373 "impl_name": "posix" 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "sock_impl_set_options", 00:24:53.373 "params": { 00:24:53.373 "impl_name": "ssl", 00:24:53.373 "recv_buf_size": 4096, 00:24:53.373 "send_buf_size": 4096, 00:24:53.373 "enable_recv_pipe": true, 00:24:53.373 "enable_quickack": false, 00:24:53.373 "enable_placement_id": 0, 00:24:53.373 "enable_zerocopy_send_server": true, 00:24:53.373 "enable_zerocopy_send_client": false, 00:24:53.373 "zerocopy_threshold": 0, 
00:24:53.373 "tls_version": 0, 00:24:53.373 "enable_ktls": false 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "sock_impl_set_options", 00:24:53.373 "params": { 00:24:53.373 "impl_name": "posix", 00:24:53.373 "recv_buf_size": 2097152, 00:24:53.373 "send_buf_size": 2097152, 00:24:53.373 "enable_recv_pipe": true, 00:24:53.373 "enable_quickack": false, 00:24:53.373 "enable_placement_id": 0, 00:24:53.373 "enable_zerocopy_send_server": true, 00:24:53.373 "enable_zerocopy_send_client": false, 00:24:53.373 "zerocopy_threshold": 0, 00:24:53.373 "tls_version": 0, 00:24:53.373 "enable_ktls": false 00:24:53.373 } 00:24:53.373 } 00:24:53.373 ] 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "subsystem": "vmd", 00:24:53.373 "config": [] 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "subsystem": "accel", 00:24:53.373 "config": [ 00:24:53.373 { 00:24:53.373 "method": "accel_set_options", 00:24:53.373 "params": { 00:24:53.373 "small_cache_size": 128, 00:24:53.373 "large_cache_size": 16, 00:24:53.373 "task_count": 2048, 00:24:53.373 "sequence_count": 2048, 00:24:53.373 "buf_count": 2048 00:24:53.373 } 00:24:53.373 } 00:24:53.373 ] 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "subsystem": "bdev", 00:24:53.373 "config": [ 00:24:53.373 { 00:24:53.373 "method": "bdev_set_options", 00:24:53.373 "params": { 00:24:53.373 "bdev_io_pool_size": 65535, 00:24:53.373 "bdev_io_cache_size": 256, 00:24:53.373 "bdev_auto_examine": true, 00:24:53.373 "iobuf_small_cache_size": 128, 00:24:53.373 "iobuf_large_cache_size": 16 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "bdev_raid_set_options", 00:24:53.373 "params": { 00:24:53.373 "process_window_size_kb": 1024 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "bdev_iscsi_set_options", 00:24:53.373 "params": { 00:24:53.373 "timeout_sec": 30 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "bdev_nvme_set_options", 00:24:53.373 "params": { 00:24:53.373 "action_on_timeout": 
"none", 00:24:53.373 "timeout_us": 0, 00:24:53.373 "timeout_admin_us": 0, 00:24:53.373 "keep_alive_timeout_ms": 10000, 00:24:53.373 "arbitration_burst": 0, 00:24:53.373 "low_priority_weight": 0, 00:24:53.373 "medium_priority_weight": 0, 00:24:53.373 "high_priority_weight": 0, 00:24:53.373 "nvme_adminq_poll_period_us": 10000, 00:24:53.373 "nvme_ioq_poll_period_us": 0, 00:24:53.373 "io_queue_requests": 512, 00:24:53.373 "delay_cmd_submit": true, 00:24:53.373 "transport_retry_count": 4, 00:24:53.373 "bdev_retry_count": 3, 00:24:53.373 "transport_ack_timeout": 0, 00:24:53.373 "ctrlr_loss_timeout_sec": 0, 00:24:53.373 "reconnect_delay_sec": 0, 00:24:53.373 "fast_io_fail_timeout_sec": 0, 00:24:53.373 "disable_auto_failback": false, 00:24:53.373 "generate_uuids": false, 00:24:53.373 "transport_tos": 0, 00:24:53.373 "nvme_error_stat": false, 00:24:53.373 "rdma_srq_size": 0, 00:24:53.373 "io_path_stat": false, 00:24:53.373 "allow_accel_sequence": false, 00:24:53.373 "rdma_max_cq_size": 0, 00:24:53.373 "rdma_cm_event_timeout_ms": 0, 00:24:53.373 "dhchap_digests": [ 00:24:53.373 "sha256", 00:24:53.373 "sha384", 00:24:53.373 "sha512" 00:24:53.373 ], 00:24:53.373 "dhchap_dhgroups": [ 00:24:53.373 "null", 00:24:53.373 "ffdhe2048", 00:24:53.373 "ffdhe3072", 00:24:53.373 "ffdhe4096", 00:24:53.373 "ffdhe6144", 00:24:53.373 "ffdhe8192" 00:24:53.373 ] 00:24:53.373 } 00:24:53.373 }, 00:24:53.373 { 00:24:53.373 "method": "bdev_nvme_attach_controller", 00:24:53.373 "params": { 00:24:53.373 "name": "TLSTEST", 00:24:53.373 "trtype": "TCP", 00:24:53.373 "adrfam": "IPv4", 00:24:53.373 "traddr": "10.0.0.2", 00:24:53.373 "trsvcid": "4420", 00:24:53.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.373 "prchk_reftag": false, 00:24:53.373 "prchk_guard": false, 00:24:53.373 "ctrlr_loss_timeout_sec": 0, 00:24:53.373 "reconnect_delay_sec": 0, 00:24:53.374 "fast_io_fail_timeout_sec": 0, 00:24:53.374 "psk": "/tmp/tmp.o0WDExQloy", 00:24:53.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.374 
"hdgst": false, 00:24:53.374 "ddgst": false 00:24:53.374 } 00:24:53.374 }, 00:24:53.374 { 00:24:53.374 "method": "bdev_nvme_set_hotplug", 00:24:53.374 "params": { 00:24:53.374 "period_us": 100000, 00:24:53.374 "enable": false 00:24:53.374 } 00:24:53.374 }, 00:24:53.374 { 00:24:53.374 "method": "bdev_wait_for_examine" 00:24:53.374 } 00:24:53.374 ] 00:24:53.374 }, 00:24:53.374 { 00:24:53.374 "subsystem": "nbd", 00:24:53.374 "config": [] 00:24:53.374 } 00:24:53.374 ] 00:24:53.374 }' 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 992339 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 992339 ']' 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 992339 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 992339 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 992339' 00:24:53.374 killing process with pid 992339 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 992339 00:24:53.374 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.374 00:24:53.374 Latency(us) 00:24:53.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.374 =================================================================================================================== 00:24:53.374 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:53.374 [2024-07-12 00:37:21.210539] app.c:1024:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:53.374 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 992339 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 992121 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 992121 ']' 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 992121 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 992121 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 992121' 00:24:53.632 killing process with pid 992121 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 992121 00:24:53.632 [2024-07-12 00:37:21.392211] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:53.632 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 992121 00:24:53.892 00:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:53.892 00:37:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.892 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:53.892 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.892 00:37:21 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:53.892 "subsystems": [ 00:24:53.892 { 00:24:53.892 "subsystem": 
"keyring", 00:24:53.892 "config": [] 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "subsystem": "iobuf", 00:24:53.892 "config": [ 00:24:53.892 { 00:24:53.892 "method": "iobuf_set_options", 00:24:53.892 "params": { 00:24:53.892 "small_pool_count": 8192, 00:24:53.892 "large_pool_count": 1024, 00:24:53.892 "small_bufsize": 8192, 00:24:53.892 "large_bufsize": 135168 00:24:53.892 } 00:24:53.892 } 00:24:53.892 ] 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "subsystem": "sock", 00:24:53.892 "config": [ 00:24:53.892 { 00:24:53.892 "method": "sock_set_default_impl", 00:24:53.892 "params": { 00:24:53.892 "impl_name": "posix" 00:24:53.892 } 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "method": "sock_impl_set_options", 00:24:53.892 "params": { 00:24:53.892 "impl_name": "ssl", 00:24:53.892 "recv_buf_size": 4096, 00:24:53.892 "send_buf_size": 4096, 00:24:53.892 "enable_recv_pipe": true, 00:24:53.892 "enable_quickack": false, 00:24:53.892 "enable_placement_id": 0, 00:24:53.892 "enable_zerocopy_send_server": true, 00:24:53.892 "enable_zerocopy_send_client": false, 00:24:53.892 "zerocopy_threshold": 0, 00:24:53.892 "tls_version": 0, 00:24:53.892 "enable_ktls": false 00:24:53.892 } 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "method": "sock_impl_set_options", 00:24:53.892 "params": { 00:24:53.892 "impl_name": "posix", 00:24:53.892 "recv_buf_size": 2097152, 00:24:53.892 "send_buf_size": 2097152, 00:24:53.892 "enable_recv_pipe": true, 00:24:53.892 "enable_quickack": false, 00:24:53.892 "enable_placement_id": 0, 00:24:53.892 "enable_zerocopy_send_server": true, 00:24:53.892 "enable_zerocopy_send_client": false, 00:24:53.892 "zerocopy_threshold": 0, 00:24:53.892 "tls_version": 0, 00:24:53.892 "enable_ktls": false 00:24:53.892 } 00:24:53.892 } 00:24:53.892 ] 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "subsystem": "vmd", 00:24:53.892 "config": [] 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "subsystem": "accel", 00:24:53.892 "config": [ 00:24:53.892 { 00:24:53.892 "method": 
"accel_set_options", 00:24:53.892 "params": { 00:24:53.892 "small_cache_size": 128, 00:24:53.892 "large_cache_size": 16, 00:24:53.892 "task_count": 2048, 00:24:53.892 "sequence_count": 2048, 00:24:53.892 "buf_count": 2048 00:24:53.892 } 00:24:53.892 } 00:24:53.892 ] 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "subsystem": "bdev", 00:24:53.892 "config": [ 00:24:53.892 { 00:24:53.892 "method": "bdev_set_options", 00:24:53.892 "params": { 00:24:53.892 "bdev_io_pool_size": 65535, 00:24:53.892 "bdev_io_cache_size": 256, 00:24:53.892 "bdev_auto_examine": true, 00:24:53.892 "iobuf_small_cache_size": 128, 00:24:53.892 "iobuf_large_cache_size": 16 00:24:53.892 } 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "method": "bdev_raid_set_options", 00:24:53.892 "params": { 00:24:53.892 "process_window_size_kb": 1024 00:24:53.892 } 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "method": "bdev_iscsi_set_options", 00:24:53.892 "params": { 00:24:53.892 "timeout_sec": 30 00:24:53.892 } 00:24:53.892 }, 00:24:53.892 { 00:24:53.892 "method": "bdev_nvme_set_options", 00:24:53.893 "params": { 00:24:53.893 "action_on_timeout": "none", 00:24:53.893 "timeout_us": 0, 00:24:53.893 "timeout_admin_us": 0, 00:24:53.893 "keep_alive_timeout_ms": 10000, 00:24:53.893 "arbitration_burst": 0, 00:24:53.893 "low_priority_weight": 0, 00:24:53.893 "medium_priority_weight": 0, 00:24:53.893 "high_priority_weight": 0, 00:24:53.893 "nvme_adminq_poll_period_us": 10000, 00:24:53.893 "nvme_ioq_poll_period_us": 0, 00:24:53.893 "io_queue_requests": 0, 00:24:53.893 "delay_cmd_submit": true, 00:24:53.893 "transport_retry_count": 4, 00:24:53.893 "bdev_retry_count": 3, 00:24:53.893 "transport_ack_timeout": 0, 00:24:53.893 "ctrlr_loss_timeout_sec": 0, 00:24:53.893 "reconnect_delay_sec": 0, 00:24:53.893 "fast_io_fail_timeout_sec": 0, 00:24:53.893 "disable_auto_failback": false, 00:24:53.893 "generate_uuids": false, 00:24:53.893 "transport_tos": 0, 00:24:53.893 "nvme_error_stat": false, 00:24:53.893 "rdma_srq_size": 0, 
00:24:53.893 "io_path_stat": false, 00:24:53.893 "allow_accel_sequence": false, 00:24:53.893 "rdma_max_cq_size": 0, 00:24:53.893 "rdma_cm_event_timeout_ms": 0, 00:24:53.893 "dhchap_digests": [ 00:24:53.893 "sha256", 00:24:53.893 "sha384", 00:24:53.893 "sha512" 00:24:53.893 ], 00:24:53.893 "dhchap_dhgroups": [ 00:24:53.893 "null", 00:24:53.893 "ffdhe2048", 00:24:53.893 "ffdhe3072", 00:24:53.893 "ffdhe4096", 00:24:53.893 "ffdhe6144", 00:24:53.893 "ffdhe8192" 00:24:53.893 ] 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "bdev_nvme_set_hotplug", 00:24:53.893 "params": { 00:24:53.893 "period_us": 100000, 00:24:53.893 "enable": false 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "bdev_malloc_create", 00:24:53.893 "params": { 00:24:53.893 "name": "malloc0", 00:24:53.893 "num_blocks": 8192, 00:24:53.893 "block_size": 4096, 00:24:53.893 "physical_block_size": 4096, 00:24:53.893 "uuid": "b06b8f72-f949-4d86-ba3a-bc60899b0b09", 00:24:53.893 "optimal_io_boundary": 0 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "bdev_wait_for_examine" 00:24:53.893 } 00:24:53.893 ] 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "subsystem": "nbd", 00:24:53.893 "config": [] 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "subsystem": "scheduler", 00:24:53.893 "config": [ 00:24:53.893 { 00:24:53.893 "method": "framework_set_scheduler", 00:24:53.893 "params": { 00:24:53.893 "name": "static" 00:24:53.893 } 00:24:53.893 } 00:24:53.893 ] 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "subsystem": "nvmf", 00:24:53.893 "config": [ 00:24:53.893 { 00:24:53.893 "method": "nvmf_set_config", 00:24:53.893 "params": { 00:24:53.893 "discovery_filter": "match_any", 00:24:53.893 "admin_cmd_passthru": { 00:24:53.893 "identify_ctrlr": false 00:24:53.893 } 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_set_max_subsystems", 00:24:53.893 "params": { 00:24:53.893 "max_subsystems": 1024 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 
00:24:53.893 "method": "nvmf_set_crdt", 00:24:53.893 "params": { 00:24:53.893 "crdt1": 0, 00:24:53.893 "crdt2": 0, 00:24:53.893 "crdt3": 0 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_create_transport", 00:24:53.893 "params": { 00:24:53.893 "trtype": "TCP", 00:24:53.893 "max_queue_depth": 128, 00:24:53.893 "max_io_qpairs_per_ctrlr": 127, 00:24:53.893 "in_capsule_data_size": 4096, 00:24:53.893 "max_io_size": 131072, 00:24:53.893 "io_unit_size": 131072, 00:24:53.893 "max_aq_depth": 128, 00:24:53.893 "num_shared_buffers": 511, 00:24:53.893 "buf_cache_size": 4294967295, 00:24:53.893 "dif_insert_or_strip": false, 00:24:53.893 "zcopy": false, 00:24:53.893 "c2h_success": false, 00:24:53.893 "sock_priority": 0, 00:24:53.893 "abort_timeout_sec": 1, 00:24:53.893 "ack_timeout": 0, 00:24:53.893 "data_wr_pool_size": 0 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_create_subsystem", 00:24:53.893 "params": { 00:24:53.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.893 "allow_any_host": false, 00:24:53.893 "serial_number": "SPDK00000000000001", 00:24:53.893 "model_number": "SPDK bdev Controller", 00:24:53.893 "max_namespaces": 10, 00:24:53.893 "min_cntlid": 1, 00:24:53.893 "max_cntlid": 65519, 00:24:53.893 "ana_reporting": false 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_subsystem_add_host", 00:24:53.893 "params": { 00:24:53.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.893 "host": "nqn.2016-06.io.spdk:host1", 00:24:53.893 "psk": "/tmp/tmp.o0WDExQloy" 00:24:53.893 } 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_subsystem_add_ns", 00:24:53.893 "params": { 00:24:53.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.893 "namespace": { 00:24:53.893 "nsid": 1, 00:24:53.893 "bdev_name": "malloc0", 00:24:53.893 "nguid": "B06B8F72F9494D86BA3ABC60899B0B09", 00:24:53.893 "uuid": "b06b8f72-f949-4d86-ba3a-bc60899b0b09", 00:24:53.893 "no_auto_visible": false 00:24:53.893 } 00:24:53.893 
} 00:24:53.893 }, 00:24:53.893 { 00:24:53.893 "method": "nvmf_subsystem_add_listener", 00:24:53.893 "params": { 00:24:53.893 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.893 "listen_address": { 00:24:53.893 "trtype": "TCP", 00:24:53.893 "adrfam": "IPv4", 00:24:53.893 "traddr": "10.0.0.2", 00:24:53.893 "trsvcid": "4420" 00:24:53.893 }, 00:24:53.893 "secure_channel": true 00:24:53.893 } 00:24:53.893 } 00:24:53.893 ] 00:24:53.893 } 00:24:53.893 ] 00:24:53.893 }' 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=992556 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 992556 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 992556 ']' 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:53.893 00:37:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.893 [2024-07-12 00:37:21.614585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:24:53.893 [2024-07-12 00:37:21.614697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.893 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.893 [2024-07-12 00:37:21.678153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.154 [2024-07-12 00:37:21.764021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.154 [2024-07-12 00:37:21.764080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.154 [2024-07-12 00:37:21.764097] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.154 [2024-07-12 00:37:21.764111] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.154 [2024-07-12 00:37:21.764123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.154 [2024-07-12 00:37:21.764205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.154 [2024-07-12 00:37:21.984435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.412 [2024-07-12 00:37:22.000357] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:54.412 [2024-07-12 00:37:22.016411] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.412 [2024-07-12 00:37:22.024779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=992672 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 992672 /var/tmp/bdevperf.sock 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 992672 ']' 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:54.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.984 00:37:22 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:54.984 "subsystems": [ 00:24:54.984 { 00:24:54.984 "subsystem": "keyring", 00:24:54.984 "config": [] 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "subsystem": "iobuf", 00:24:54.984 "config": [ 00:24:54.984 { 00:24:54.984 "method": "iobuf_set_options", 00:24:54.984 "params": { 00:24:54.984 "small_pool_count": 8192, 00:24:54.984 "large_pool_count": 1024, 00:24:54.984 "small_bufsize": 8192, 00:24:54.984 "large_bufsize": 135168 00:24:54.984 } 00:24:54.984 } 00:24:54.984 ] 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "subsystem": "sock", 00:24:54.984 "config": [ 00:24:54.984 { 00:24:54.984 "method": "sock_set_default_impl", 00:24:54.984 "params": { 00:24:54.984 "impl_name": "posix" 00:24:54.984 } 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "method": "sock_impl_set_options", 00:24:54.984 "params": { 00:24:54.984 "impl_name": "ssl", 00:24:54.984 "recv_buf_size": 4096, 00:24:54.984 "send_buf_size": 4096, 00:24:54.984 "enable_recv_pipe": true, 00:24:54.984 "enable_quickack": false, 00:24:54.984 "enable_placement_id": 0, 00:24:54.984 "enable_zerocopy_send_server": true, 00:24:54.984 "enable_zerocopy_send_client": false, 00:24:54.984 "zerocopy_threshold": 0, 00:24:54.984 "tls_version": 0, 00:24:54.984 "enable_ktls": false 00:24:54.984 } 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "method": "sock_impl_set_options", 00:24:54.984 "params": { 00:24:54.984 "impl_name": "posix", 00:24:54.984 "recv_buf_size": 
2097152, 00:24:54.984 "send_buf_size": 2097152, 00:24:54.984 "enable_recv_pipe": true, 00:24:54.984 "enable_quickack": false, 00:24:54.984 "enable_placement_id": 0, 00:24:54.984 "enable_zerocopy_send_server": true, 00:24:54.984 "enable_zerocopy_send_client": false, 00:24:54.984 "zerocopy_threshold": 0, 00:24:54.984 "tls_version": 0, 00:24:54.984 "enable_ktls": false 00:24:54.984 } 00:24:54.984 } 00:24:54.984 ] 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "subsystem": "vmd", 00:24:54.984 "config": [] 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "subsystem": "accel", 00:24:54.984 "config": [ 00:24:54.984 { 00:24:54.984 "method": "accel_set_options", 00:24:54.984 "params": { 00:24:54.984 "small_cache_size": 128, 00:24:54.984 "large_cache_size": 16, 00:24:54.984 "task_count": 2048, 00:24:54.984 "sequence_count": 2048, 00:24:54.984 "buf_count": 2048 00:24:54.984 } 00:24:54.984 } 00:24:54.984 ] 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "subsystem": "bdev", 00:24:54.984 "config": [ 00:24:54.984 { 00:24:54.984 "method": "bdev_set_options", 00:24:54.984 "params": { 00:24:54.984 "bdev_io_pool_size": 65535, 00:24:54.984 "bdev_io_cache_size": 256, 00:24:54.984 "bdev_auto_examine": true, 00:24:54.984 "iobuf_small_cache_size": 128, 00:24:54.984 "iobuf_large_cache_size": 16 00:24:54.984 } 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "method": "bdev_raid_set_options", 00:24:54.984 "params": { 00:24:54.984 "process_window_size_kb": 1024 00:24:54.984 } 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "method": "bdev_iscsi_set_options", 00:24:54.984 "params": { 00:24:54.984 "timeout_sec": 30 00:24:54.984 } 00:24:54.984 }, 00:24:54.984 { 00:24:54.984 "method": "bdev_nvme_set_options", 00:24:54.984 "params": { 00:24:54.984 "action_on_timeout": "none", 00:24:54.984 "timeout_us": 0, 00:24:54.984 "timeout_admin_us": 0, 00:24:54.984 "keep_alive_timeout_ms": 10000, 00:24:54.984 "arbitration_burst": 0, 00:24:54.984 "low_priority_weight": 0, 00:24:54.984 "medium_priority_weight": 0, 00:24:54.984 
"high_priority_weight": 0, 00:24:54.984 "nvme_adminq_poll_period_us": 10000, 00:24:54.984 "nvme_ioq_poll_period_us": 0, 00:24:54.984 "io_queue_requests": 512, 00:24:54.984 "delay_cmd_submit": true, 00:24:54.984 "transport_retry_count": 4, 00:24:54.984 "bdev_retry_count": 3, 00:24:54.984 "transport_ack_timeout": 0, 00:24:54.984 "ctrlr_loss_timeout_sec": 0, 00:24:54.984 "reconnect_delay_sec": 0, 00:24:54.984 "fast_io_fail_timeout_sec": 0, 00:24:54.984 "disable_auto_failback": false, 00:24:54.984 "generate_uuids": false, 00:24:54.984 "transport_tos": 0, 00:24:54.984 "nvme_error_stat": false, 00:24:54.984 "rdma_srq_size": 0, 00:24:54.984 "io_path_stat": false, 00:24:54.984 "allow_accel_sequence": false, 00:24:54.985 "rdma_max_cq_size": 0, 00:24:54.985 "rdma_cm_event_timeout_ms": 0, 00:24:54.985 "dhchap_digests": [ 00:24:54.985 "sha256", 00:24:54.985 "sha384", 00:24:54.985 "sha512" 00:24:54.985 ], 00:24:54.985 "dhchap_dhgroups": [ 00:24:54.985 "null", 00:24:54.985 "ffdhe2048", 00:24:54.985 "ffdhe3072", 00:24:54.985 "ffdhe4096", 00:24:54.985 "ffdhe6144", 00:24:54.985 "ffdhe8192" 00:24:54.985 ] 00:24:54.985 } 00:24:54.985 }, 00:24:54.985 { 00:24:54.985 "method": "bdev_nvme_attach_controller", 00:24:54.985 "params": { 00:24:54.985 "name": "TLSTEST", 00:24:54.985 "trtype": "TCP", 00:24:54.985 "adrfam": "IPv4", 00:24:54.985 "traddr": "10.0.0.2", 00:24:54.985 "trsvcid": "4420", 00:24:54.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.985 "prchk_reftag": false, 00:24:54.985 "prchk_guard": false, 00:24:54.985 "ctrlr_loss_timeout_sec": 0, 00:24:54.985 "reconnect_delay_sec": 0, 00:24:54.985 "fast_io_fail_timeout_sec": 0, 00:24:54.985 "psk": "/tmp/tmp.o0WDExQloy", 00:24:54.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.985 "hdgst": false, 00:24:54.985 "ddgst": false 00:24:54.985 } 00:24:54.985 }, 00:24:54.985 { 00:24:54.985 "method": "bdev_nvme_set_hotplug", 00:24:54.985 "params": { 00:24:54.985 "period_us": 100000, 00:24:54.985 "enable": false 00:24:54.985 } 
00:24:54.985 }, 00:24:54.985 { 00:24:54.985 "method": "bdev_wait_for_examine" 00:24:54.985 } 00:24:54.985 ] 00:24:54.985 }, 00:24:54.985 { 00:24:54.985 "subsystem": "nbd", 00:24:54.985 "config": [] 00:24:54.985 } 00:24:54.985 ] 00:24:54.985 }' 00:24:54.985 [2024-07-12 00:37:22.727387] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:54.985 [2024-07-12 00:37:22.727494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992672 ] 00:24:54.985 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.985 [2024-07-12 00:37:22.787990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.242 [2024-07-12 00:37:22.876472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.242 [2024-07-12 00:37:23.029037] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.242 [2024-07-12 00:37:23.029168] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:55.499 00:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:55.499 00:37:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:24:55.499 00:37:23 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:55.499 Running I/O for 10 seconds... 
00:25:05.480 
00:25:05.480 Latency(us)
00:25:05.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.480 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:05.480 Verification LBA range: start 0x0 length 0x2000
00:25:05.480 TLSTESTn1 : 10.03 3149.83 12.30 0.00 0.00 40539.80 12281.93 37671.06
00:25:05.480 ===================================================================================================================
00:25:05.480 Total : 3149.83 12.30 0.00 0.00 40539.80 12281.93 37671.06
00:25:05.480 0
00:25:05.480 00:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:05.480 00:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 992672
00:25:05.480 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 992672 ']'
00:25:05.480 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 992672
00:25:05.480 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 992672
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 992672'
killing process with pid 992672
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 992672
00:25:05.740 Received shutdown signal, test time was about 10.000000 seconds
00:25:05.740 
00:25:05.740 Latency(us)
00:25:05.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.740 ===================================================================================================================
00:25:05.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:05.740 [2024-07-12 00:37:33.343785] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 992672
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 992556
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 992556 ']'
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 992556
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 992556
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 992556'
killing process with pid 992556
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 992556
00:25:05.740 [2024-07-12 00:37:33.537041] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:25:05.740 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 992556
00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart
00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable
00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=993677 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 993677 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 993677 ']' 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:06.001 00:37:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.001 [2024-07-12 00:37:33.765316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:06.001 [2024-07-12 00:37:33.765415] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.001 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.001 [2024-07-12 00:37:33.830119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.259 [2024-07-12 00:37:33.919338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:06.259 [2024-07-12 00:37:33.919399] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.259 [2024-07-12 00:37:33.919415] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.259 [2024-07-12 00:37:33.919429] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.259 [2024-07-12 00:37:33.919441] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.259 [2024-07-12 00:37:33.919471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.o0WDExQloy 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.o0WDExQloy 00:25:06.259 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:06.517 [2024-07-12 00:37:34.322547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.517 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:07.082 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:07.082 [2024-07-12 00:37:34.916162] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.082 [2024-07-12 00:37:34.916378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.340 00:37:34 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:07.599 malloc0 00:25:07.599 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:07.857 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.o0WDExQloy 00:25:08.116 [2024-07-12 00:37:35.792878] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=993829 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 993829 /var/tmp/bdevperf.sock 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 993829 ']' 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:08.116 00:37:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.116 [2024-07-12 00:37:35.857850] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:08.116 [2024-07-12 00:37:35.857938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993829 ] 00:25:08.116 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.116 [2024-07-12 00:37:35.917097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.374 [2024-07-12 00:37:36.004228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.374 00:37:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:08.374 00:37:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:08.374 00:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o0WDExQloy 00:25:08.632 00:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:08.891 [2024-07-12 00:37:36.691790] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:09.150 nvme0n1 00:25:09.150 
00:37:36 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:09.150 Running I/O for 1 seconds...
00:25:10.088 
00:25:10.088 Latency(us)
00:25:10.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.088 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:25:10.088 Verification LBA range: start 0x0 length 0x2000
00:25:10.088 nvme0n1 : 1.02 3106.11 12.13 0.00 0.00 40726.50 7864.32 40195.41
00:25:10.088 ===================================================================================================================
00:25:10.088 Total : 3106.11 12.13 0.00 0.00 40726.50 7864.32 40195.41
00:25:10.088 0
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 993829
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 993829 ']'
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 993829
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 993829
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 993829'
killing process with pid 993829
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 993829
00:25:10.349 Received shutdown signal, test time was about 1.000000 seconds
00:25:10.349 
00:25:10.349 Latency(us)
00:25:10.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:10.349 ===================================================================================================================
00:25:10.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:10.349 00:37:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 993829
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 993677
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 993677 ']'
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 993677
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 993677
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 993677'
killing process with pid 993677
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 993677
00:25:10.349 [2024-07-12 00:37:38.154607] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:25:10.349 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 993677
00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable
00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- #
nvmfpid=994125 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 994125 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 994125 ']' 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:10.610 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.610 [2024-07-12 00:37:38.386413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:10.610 [2024-07-12 00:37:38.386517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.610 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.870 [2024-07-12 00:37:38.452766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.870 [2024-07-12 00:37:38.542215] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.870 [2024-07-12 00:37:38.542277] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:10.870 [2024-07-12 00:37:38.542292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.870 [2024-07-12 00:37:38.542306] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.870 [2024-07-12 00:37:38.542317] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.870 [2024-07-12 00:37:38.542364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.870 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.870 [2024-07-12 00:37:38.674075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.870 malloc0 00:25:10.870 [2024-07-12 00:37:38.704438] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:10.870 [2024-07-12 00:37:38.704676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=994153 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 
994153 /var/tmp/bdevperf.sock 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 994153 ']' 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:11.129 00:37:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.129 [2024-07-12 00:37:38.778316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:11.129 [2024-07-12 00:37:38.778413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994153 ] 00:25:11.129 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.129 [2024-07-12 00:37:38.839055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.129 [2024-07-12 00:37:38.926332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.387 00:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:11.387 00:37:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:11.387 00:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.o0WDExQloy 00:25:11.645 00:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:11.902 [2024-07-12 00:37:39.621676] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.902 nvme0n1 00:25:11.902 00:37:39 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.162 Running I/O for 1 seconds... 
00:25:13.099 
00:25:13.099 Latency(us)
00:25:13.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.099 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:25:13.099 Verification LBA range: start 0x0 length 0x2000
00:25:13.099 nvme0n1 : 1.03 3062.63 11.96 0.00 0.00 41174.95 10145.94 39030.33
00:25:13.099 ===================================================================================================================
00:25:13.099 Total : 3062.63 11.96 0.00 0.00 41174.95 10145.94 39030.33
00:25:13.099 0
00:25:13.099 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config
00:25:13.099 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:13.099 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:25:13.358 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:13.358 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:25:13.358 "subsystems": [ 00:25:13.358 { 00:25:13.358 "subsystem": "keyring", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "keyring_file_add_key", 00:25:13.358 "params": { 00:25:13.358 "name": "key0", 00:25:13.358 "path": "/tmp/tmp.o0WDExQloy" 00:25:13.358 } 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "iobuf", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "iobuf_set_options", 00:25:13.358 "params": { 00:25:13.358 "small_pool_count": 8192, 00:25:13.358 "large_pool_count": 1024, 00:25:13.358 "small_bufsize": 8192, 00:25:13.358 "large_bufsize": 135168 00:25:13.358 } 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "sock", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "sock_set_default_impl", 00:25:13.358 "params": { 00:25:13.358 "impl_name": "posix" 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "sock_impl_set_options", 00:25:13.358 
"params": { 00:25:13.358 "impl_name": "ssl", 00:25:13.358 "recv_buf_size": 4096, 00:25:13.358 "send_buf_size": 4096, 00:25:13.358 "enable_recv_pipe": true, 00:25:13.358 "enable_quickack": false, 00:25:13.358 "enable_placement_id": 0, 00:25:13.358 "enable_zerocopy_send_server": true, 00:25:13.358 "enable_zerocopy_send_client": false, 00:25:13.358 "zerocopy_threshold": 0, 00:25:13.358 "tls_version": 0, 00:25:13.358 "enable_ktls": false 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "sock_impl_set_options", 00:25:13.358 "params": { 00:25:13.358 "impl_name": "posix", 00:25:13.358 "recv_buf_size": 2097152, 00:25:13.358 "send_buf_size": 2097152, 00:25:13.358 "enable_recv_pipe": true, 00:25:13.358 "enable_quickack": false, 00:25:13.358 "enable_placement_id": 0, 00:25:13.358 "enable_zerocopy_send_server": true, 00:25:13.358 "enable_zerocopy_send_client": false, 00:25:13.358 "zerocopy_threshold": 0, 00:25:13.358 "tls_version": 0, 00:25:13.358 "enable_ktls": false 00:25:13.358 } 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "vmd", 00:25:13.358 "config": [] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "accel", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "accel_set_options", 00:25:13.358 "params": { 00:25:13.358 "small_cache_size": 128, 00:25:13.358 "large_cache_size": 16, 00:25:13.358 "task_count": 2048, 00:25:13.358 "sequence_count": 2048, 00:25:13.358 "buf_count": 2048 00:25:13.358 } 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "bdev", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "bdev_set_options", 00:25:13.358 "params": { 00:25:13.358 "bdev_io_pool_size": 65535, 00:25:13.358 "bdev_io_cache_size": 256, 00:25:13.358 "bdev_auto_examine": true, 00:25:13.358 "iobuf_small_cache_size": 128, 00:25:13.358 "iobuf_large_cache_size": 16 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_raid_set_options", 
00:25:13.358 "params": { 00:25:13.358 "process_window_size_kb": 1024 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_iscsi_set_options", 00:25:13.358 "params": { 00:25:13.358 "timeout_sec": 30 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_nvme_set_options", 00:25:13.358 "params": { 00:25:13.358 "action_on_timeout": "none", 00:25:13.358 "timeout_us": 0, 00:25:13.358 "timeout_admin_us": 0, 00:25:13.358 "keep_alive_timeout_ms": 10000, 00:25:13.358 "arbitration_burst": 0, 00:25:13.358 "low_priority_weight": 0, 00:25:13.358 "medium_priority_weight": 0, 00:25:13.358 "high_priority_weight": 0, 00:25:13.358 "nvme_adminq_poll_period_us": 10000, 00:25:13.358 "nvme_ioq_poll_period_us": 0, 00:25:13.358 "io_queue_requests": 0, 00:25:13.358 "delay_cmd_submit": true, 00:25:13.358 "transport_retry_count": 4, 00:25:13.358 "bdev_retry_count": 3, 00:25:13.358 "transport_ack_timeout": 0, 00:25:13.358 "ctrlr_loss_timeout_sec": 0, 00:25:13.358 "reconnect_delay_sec": 0, 00:25:13.358 "fast_io_fail_timeout_sec": 0, 00:25:13.358 "disable_auto_failback": false, 00:25:13.358 "generate_uuids": false, 00:25:13.358 "transport_tos": 0, 00:25:13.358 "nvme_error_stat": false, 00:25:13.358 "rdma_srq_size": 0, 00:25:13.358 "io_path_stat": false, 00:25:13.358 "allow_accel_sequence": false, 00:25:13.358 "rdma_max_cq_size": 0, 00:25:13.358 "rdma_cm_event_timeout_ms": 0, 00:25:13.358 "dhchap_digests": [ 00:25:13.358 "sha256", 00:25:13.358 "sha384", 00:25:13.358 "sha512" 00:25:13.358 ], 00:25:13.358 "dhchap_dhgroups": [ 00:25:13.358 "null", 00:25:13.358 "ffdhe2048", 00:25:13.358 "ffdhe3072", 00:25:13.358 "ffdhe4096", 00:25:13.358 "ffdhe6144", 00:25:13.358 "ffdhe8192" 00:25:13.358 ] 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_nvme_set_hotplug", 00:25:13.358 "params": { 00:25:13.358 "period_us": 100000, 00:25:13.358 "enable": false 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_malloc_create", 
00:25:13.358 "params": { 00:25:13.358 "name": "malloc0", 00:25:13.358 "num_blocks": 8192, 00:25:13.358 "block_size": 4096, 00:25:13.358 "physical_block_size": 4096, 00:25:13.358 "uuid": "54044908-e665-405c-94bc-fd7c94744443", 00:25:13.358 "optimal_io_boundary": 0 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "bdev_wait_for_examine" 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "nbd", 00:25:13.358 "config": [] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "scheduler", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "framework_set_scheduler", 00:25:13.358 "params": { 00:25:13.358 "name": "static" 00:25:13.358 } 00:25:13.358 } 00:25:13.358 ] 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "subsystem": "nvmf", 00:25:13.358 "config": [ 00:25:13.358 { 00:25:13.358 "method": "nvmf_set_config", 00:25:13.358 "params": { 00:25:13.358 "discovery_filter": "match_any", 00:25:13.358 "admin_cmd_passthru": { 00:25:13.358 "identify_ctrlr": false 00:25:13.358 } 00:25:13.358 } 00:25:13.358 }, 00:25:13.358 { 00:25:13.358 "method": "nvmf_set_max_subsystems", 00:25:13.358 "params": { 00:25:13.359 "max_subsystems": 1024 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_set_crdt", 00:25:13.359 "params": { 00:25:13.359 "crdt1": 0, 00:25:13.359 "crdt2": 0, 00:25:13.359 "crdt3": 0 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_create_transport", 00:25:13.359 "params": { 00:25:13.359 "trtype": "TCP", 00:25:13.359 "max_queue_depth": 128, 00:25:13.359 "max_io_qpairs_per_ctrlr": 127, 00:25:13.359 "in_capsule_data_size": 4096, 00:25:13.359 "max_io_size": 131072, 00:25:13.359 "io_unit_size": 131072, 00:25:13.359 "max_aq_depth": 128, 00:25:13.359 "num_shared_buffers": 511, 00:25:13.359 "buf_cache_size": 4294967295, 00:25:13.359 "dif_insert_or_strip": false, 00:25:13.359 "zcopy": false, 00:25:13.359 "c2h_success": false, 00:25:13.359 "sock_priority": 0, 
00:25:13.359 "abort_timeout_sec": 1, 00:25:13.359 "ack_timeout": 0, 00:25:13.359 "data_wr_pool_size": 0 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_create_subsystem", 00:25:13.359 "params": { 00:25:13.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.359 "allow_any_host": false, 00:25:13.359 "serial_number": "00000000000000000000", 00:25:13.359 "model_number": "SPDK bdev Controller", 00:25:13.359 "max_namespaces": 32, 00:25:13.359 "min_cntlid": 1, 00:25:13.359 "max_cntlid": 65519, 00:25:13.359 "ana_reporting": false 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_subsystem_add_host", 00:25:13.359 "params": { 00:25:13.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.359 "host": "nqn.2016-06.io.spdk:host1", 00:25:13.359 "psk": "key0" 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_subsystem_add_ns", 00:25:13.359 "params": { 00:25:13.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.359 "namespace": { 00:25:13.359 "nsid": 1, 00:25:13.359 "bdev_name": "malloc0", 00:25:13.359 "nguid": "54044908E665405C94BCFD7C94744443", 00:25:13.359 "uuid": "54044908-e665-405c-94bc-fd7c94744443", 00:25:13.359 "no_auto_visible": false 00:25:13.359 } 00:25:13.359 } 00:25:13.359 }, 00:25:13.359 { 00:25:13.359 "method": "nvmf_subsystem_add_listener", 00:25:13.359 "params": { 00:25:13.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.359 "listen_address": { 00:25:13.359 "trtype": "TCP", 00:25:13.359 "adrfam": "IPv4", 00:25:13.359 "traddr": "10.0.0.2", 00:25:13.359 "trsvcid": "4420" 00:25:13.359 }, 00:25:13.359 "secure_channel": true 00:25:13.359 } 00:25:13.359 } 00:25:13.359 ] 00:25:13.359 } 00:25:13.359 ] 00:25:13.359 }' 00:25:13.359 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:25:13.619 "subsystems": [ 00:25:13.619 { 
00:25:13.619 "subsystem": "keyring", 00:25:13.619 "config": [ 00:25:13.619 { 00:25:13.619 "method": "keyring_file_add_key", 00:25:13.619 "params": { 00:25:13.619 "name": "key0", 00:25:13.619 "path": "/tmp/tmp.o0WDExQloy" 00:25:13.619 } 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "iobuf", 00:25:13.619 "config": [ 00:25:13.619 { 00:25:13.619 "method": "iobuf_set_options", 00:25:13.619 "params": { 00:25:13.619 "small_pool_count": 8192, 00:25:13.619 "large_pool_count": 1024, 00:25:13.619 "small_bufsize": 8192, 00:25:13.619 "large_bufsize": 135168 00:25:13.619 } 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "sock", 00:25:13.619 "config": [ 00:25:13.619 { 00:25:13.619 "method": "sock_set_default_impl", 00:25:13.619 "params": { 00:25:13.619 "impl_name": "posix" 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "sock_impl_set_options", 00:25:13.619 "params": { 00:25:13.619 "impl_name": "ssl", 00:25:13.619 "recv_buf_size": 4096, 00:25:13.619 "send_buf_size": 4096, 00:25:13.619 "enable_recv_pipe": true, 00:25:13.619 "enable_quickack": false, 00:25:13.619 "enable_placement_id": 0, 00:25:13.619 "enable_zerocopy_send_server": true, 00:25:13.619 "enable_zerocopy_send_client": false, 00:25:13.619 "zerocopy_threshold": 0, 00:25:13.619 "tls_version": 0, 00:25:13.619 "enable_ktls": false 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "sock_impl_set_options", 00:25:13.619 "params": { 00:25:13.619 "impl_name": "posix", 00:25:13.619 "recv_buf_size": 2097152, 00:25:13.619 "send_buf_size": 2097152, 00:25:13.619 "enable_recv_pipe": true, 00:25:13.619 "enable_quickack": false, 00:25:13.619 "enable_placement_id": 0, 00:25:13.619 "enable_zerocopy_send_server": true, 00:25:13.619 "enable_zerocopy_send_client": false, 00:25:13.619 "zerocopy_threshold": 0, 00:25:13.619 "tls_version": 0, 00:25:13.619 "enable_ktls": false 00:25:13.619 } 00:25:13.619 } 00:25:13.619 ] 
00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "vmd", 00:25:13.619 "config": [] 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "accel", 00:25:13.619 "config": [ 00:25:13.619 { 00:25:13.619 "method": "accel_set_options", 00:25:13.619 "params": { 00:25:13.619 "small_cache_size": 128, 00:25:13.619 "large_cache_size": 16, 00:25:13.619 "task_count": 2048, 00:25:13.619 "sequence_count": 2048, 00:25:13.619 "buf_count": 2048 00:25:13.619 } 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "bdev", 00:25:13.619 "config": [ 00:25:13.619 { 00:25:13.619 "method": "bdev_set_options", 00:25:13.619 "params": { 00:25:13.619 "bdev_io_pool_size": 65535, 00:25:13.619 "bdev_io_cache_size": 256, 00:25:13.619 "bdev_auto_examine": true, 00:25:13.619 "iobuf_small_cache_size": 128, 00:25:13.619 "iobuf_large_cache_size": 16 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_raid_set_options", 00:25:13.619 "params": { 00:25:13.619 "process_window_size_kb": 1024 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_iscsi_set_options", 00:25:13.619 "params": { 00:25:13.619 "timeout_sec": 30 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_nvme_set_options", 00:25:13.619 "params": { 00:25:13.619 "action_on_timeout": "none", 00:25:13.619 "timeout_us": 0, 00:25:13.619 "timeout_admin_us": 0, 00:25:13.619 "keep_alive_timeout_ms": 10000, 00:25:13.619 "arbitration_burst": 0, 00:25:13.619 "low_priority_weight": 0, 00:25:13.619 "medium_priority_weight": 0, 00:25:13.619 "high_priority_weight": 0, 00:25:13.619 "nvme_adminq_poll_period_us": 10000, 00:25:13.619 "nvme_ioq_poll_period_us": 0, 00:25:13.619 "io_queue_requests": 512, 00:25:13.619 "delay_cmd_submit": true, 00:25:13.619 "transport_retry_count": 4, 00:25:13.619 "bdev_retry_count": 3, 00:25:13.619 "transport_ack_timeout": 0, 00:25:13.619 "ctrlr_loss_timeout_sec": 0, 00:25:13.619 "reconnect_delay_sec": 0, 00:25:13.619 
"fast_io_fail_timeout_sec": 0, 00:25:13.619 "disable_auto_failback": false, 00:25:13.619 "generate_uuids": false, 00:25:13.619 "transport_tos": 0, 00:25:13.619 "nvme_error_stat": false, 00:25:13.619 "rdma_srq_size": 0, 00:25:13.619 "io_path_stat": false, 00:25:13.619 "allow_accel_sequence": false, 00:25:13.619 "rdma_max_cq_size": 0, 00:25:13.619 "rdma_cm_event_timeout_ms": 0, 00:25:13.619 "dhchap_digests": [ 00:25:13.619 "sha256", 00:25:13.619 "sha384", 00:25:13.619 "sha512" 00:25:13.619 ], 00:25:13.619 "dhchap_dhgroups": [ 00:25:13.619 "null", 00:25:13.619 "ffdhe2048", 00:25:13.619 "ffdhe3072", 00:25:13.619 "ffdhe4096", 00:25:13.619 "ffdhe6144", 00:25:13.619 "ffdhe8192" 00:25:13.619 ] 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_nvme_attach_controller", 00:25:13.619 "params": { 00:25:13.619 "name": "nvme0", 00:25:13.619 "trtype": "TCP", 00:25:13.619 "adrfam": "IPv4", 00:25:13.619 "traddr": "10.0.0.2", 00:25:13.619 "trsvcid": "4420", 00:25:13.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.619 "prchk_reftag": false, 00:25:13.619 "prchk_guard": false, 00:25:13.619 "ctrlr_loss_timeout_sec": 0, 00:25:13.619 "reconnect_delay_sec": 0, 00:25:13.619 "fast_io_fail_timeout_sec": 0, 00:25:13.619 "psk": "key0", 00:25:13.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:13.619 "hdgst": false, 00:25:13.619 "ddgst": false 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_nvme_set_hotplug", 00:25:13.619 "params": { 00:25:13.619 "period_us": 100000, 00:25:13.619 "enable": false 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_enable_histogram", 00:25:13.619 "params": { 00:25:13.619 "name": "nvme0n1", 00:25:13.619 "enable": true 00:25:13.619 } 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "method": "bdev_wait_for_examine" 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }, 00:25:13.619 { 00:25:13.619 "subsystem": "nbd", 00:25:13.619 "config": [] 00:25:13.619 } 00:25:13.619 ] 00:25:13.619 }' 00:25:13.619 
00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 994153 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 994153 ']' 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 994153 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 994153 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 994153' 00:25:13.619 killing process with pid 994153 00:25:13.619 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 994153 00:25:13.619 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.619 00:25:13.619 Latency(us) 00:25:13.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.619 =================================================================================================================== 00:25:13.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.620 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 994153 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 994125 ']' 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 994125' 00:25:13.879 killing process with pid 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 994125 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:13.879 00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:13.879 "subsystems": [ 00:25:13.879 { 00:25:13.879 "subsystem": "keyring", 00:25:13.879 "config": [ 00:25:13.879 { 00:25:13.879 "method": "keyring_file_add_key", 00:25:13.879 "params": { 00:25:13.879 "name": "key0", 00:25:13.879 "path": "/tmp/tmp.o0WDExQloy" 00:25:13.879 } 00:25:13.879 } 00:25:13.879 ] 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "subsystem": "iobuf", 00:25:13.879 "config": [ 00:25:13.879 { 00:25:13.879 "method": "iobuf_set_options", 00:25:13.879 "params": { 00:25:13.879 "small_pool_count": 8192, 00:25:13.879 "large_pool_count": 1024, 00:25:13.879 "small_bufsize": 8192, 00:25:13.879 "large_bufsize": 135168 00:25:13.879 } 00:25:13.879 } 00:25:13.879 ] 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "subsystem": "sock", 00:25:13.879 "config": [ 00:25:13.879 { 00:25:13.879 "method": "sock_set_default_impl", 00:25:13.879 "params": { 00:25:13.879 "impl_name": "posix" 00:25:13.879 } 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "method": "sock_impl_set_options", 00:25:13.879 "params": { 00:25:13.879 "impl_name": "ssl", 00:25:13.879 "recv_buf_size": 4096, 00:25:13.879 "send_buf_size": 4096, 
00:25:13.879 "enable_recv_pipe": true, 00:25:13.879 "enable_quickack": false, 00:25:13.879 "enable_placement_id": 0, 00:25:13.879 "enable_zerocopy_send_server": true, 00:25:13.879 "enable_zerocopy_send_client": false, 00:25:13.879 "zerocopy_threshold": 0, 00:25:13.879 "tls_version": 0, 00:25:13.879 "enable_ktls": false 00:25:13.879 } 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "method": "sock_impl_set_options", 00:25:13.879 "params": { 00:25:13.879 "impl_name": "posix", 00:25:13.879 "recv_buf_size": 2097152, 00:25:13.879 "send_buf_size": 2097152, 00:25:13.879 "enable_recv_pipe": true, 00:25:13.879 "enable_quickack": false, 00:25:13.879 "enable_placement_id": 0, 00:25:13.879 "enable_zerocopy_send_server": true, 00:25:13.879 "enable_zerocopy_send_client": false, 00:25:13.879 "zerocopy_threshold": 0, 00:25:13.879 "tls_version": 0, 00:25:13.879 "enable_ktls": false 00:25:13.879 } 00:25:13.879 } 00:25:13.879 ] 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "subsystem": "vmd", 00:25:13.879 "config": [] 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "subsystem": "accel", 00:25:13.879 "config": [ 00:25:13.879 { 00:25:13.879 "method": "accel_set_options", 00:25:13.879 "params": { 00:25:13.879 "small_cache_size": 128, 00:25:13.879 "large_cache_size": 16, 00:25:13.879 "task_count": 2048, 00:25:13.879 "sequence_count": 2048, 00:25:13.879 "buf_count": 2048 00:25:13.879 } 00:25:13.879 } 00:25:13.879 ] 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "subsystem": "bdev", 00:25:13.879 "config": [ 00:25:13.879 { 00:25:13.879 "method": "bdev_set_options", 00:25:13.879 "params": { 00:25:13.879 "bdev_io_pool_size": 65535, 00:25:13.879 "bdev_io_cache_size": 256, 00:25:13.879 "bdev_auto_examine": true, 00:25:13.879 "iobuf_small_cache_size": 128, 00:25:13.879 "iobuf_large_cache_size": 16 00:25:13.879 } 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "method": "bdev_raid_set_options", 00:25:13.879 "params": { 00:25:13.879 "process_window_size_kb": 1024 00:25:13.879 } 00:25:13.879 }, 00:25:13.879 { 
00:25:13.879 "method": "bdev_iscsi_set_options", 00:25:13.879 "params": { 00:25:13.879 "timeout_sec": 30 00:25:13.879 } 00:25:13.879 }, 00:25:13.879 { 00:25:13.879 "method": "bdev_nvme_set_options", 00:25:13.879 "params": { 00:25:13.879 "action_on_timeout": "none", 00:25:13.879 "timeout_us": 0, 00:25:13.879 "timeout_admin_us": 0, 00:25:13.879 "keep_alive_timeout_ms": 10000, 00:25:13.879 "arbitration_burst": 0, 00:25:13.879 "low_priority_weight": 0, 00:25:13.879 "medium_priority_weight": 0, 00:25:13.879 "high_priority_weight": 0, 00:25:13.879 "nvme_adminq_poll_period_us": 10000, 00:25:13.879 "nvme_ioq_poll_period_us": 0, 00:25:13.879 "io_queue_requests": 0, 00:25:13.879 "delay_cmd_submit": true, 00:25:13.879 "transport_retry_count": 4, 00:25:13.879 "bdev_retry_count": 3, 00:25:13.879 "transport_ack_timeout": 0, 00:25:13.879 "ctrlr_loss_timeout_sec": 0, 00:25:13.879 "reconnect_delay_sec": 0, 00:25:13.879 "fast_io_fail_timeout_sec": 0, 00:25:13.879 "disable_auto_failback": false, 00:25:13.879 "generate_uuids": false, 00:25:13.879 "transport_tos": 0, 00:25:13.879 "nvme_error_stat": false, 00:25:13.879 "rdma_srq_size": 0, 00:25:13.879 "io_path_stat": false, 00:25:13.879 "allow_accel_sequence": false, 00:25:13.879 "rdma_max_cq_size": 0, 00:25:13.879 "rdma_cm_event_timeout_ms": 0, 00:25:13.879 "dhchap_digests": [ 00:25:13.879 "sha256", 00:25:13.879 "sha384", 00:25:13.879 "sha512" 00:25:13.879 ], 00:25:13.879 "dhchap_dhgroups": [ 00:25:13.879 "null", 00:25:13.879 "ffdhe2048", 00:25:13.879 "ffdhe3072", 00:25:13.879 "ffdhe4096", 00:25:13.879 "ffdhe6144", 00:25:13.880 "ffdhe8192" 00:25:13.880 ] 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "bdev_nvme_set_hotplug", 00:25:13.880 "params": { 00:25:13.880 "period_us": 100000, 00:25:13.880 "enable": false 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "bdev_malloc_create", 00:25:13.880 "params": { 00:25:13.880 "name": "malloc0", 00:25:13.880 "num_blocks": 8192, 00:25:13.880 
"block_size": 4096, 00:25:13.880 "physical_block_size": 4096, 00:25:13.880 "uuid": "54044908-e665-405c-94bc-fd7c94744443", 00:25:13.880 "optimal_io_boundary": 0 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "bdev_wait_for_examine" 00:25:13.880 } 00:25:13.880 ] 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "subsystem": "nbd", 00:25:13.880 "config": [] 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "subsystem": "scheduler", 00:25:13.880 "config": [ 00:25:13.880 { 00:25:13.880 "method": "framework_set_scheduler", 00:25:13.880 "params": { 00:25:13.880 "name": "static" 00:25:13.880 } 00:25:13.880 } 00:25:13.880 ] 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "subsystem": "nvmf", 00:25:13.880 "config": [ 00:25:13.880 { 00:25:13.880 "method": "nvmf_set_config", 00:25:13.880 "params": { 00:25:13.880 "discovery_filter": "match_any", 00:25:13.880 "admin_cmd_passthru": { 00:25:13.880 "identify_ctrlr": false 00:25:13.880 } 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_set_max_subsystems", 00:25:13.880 "params": { 00:25:13.880 "max_subsystems": 1024 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_set_crdt", 00:25:13.880 "params": { 00:25:13.880 "crdt1": 0, 00:25:13.880 "crdt2": 0, 00:25:13.880 "crdt3": 0 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_create_transport", 00:25:13.880 "params": { 00:25:13.880 "trtype": "TCP", 00:25:13.880 "max_queue_depth": 128, 00:25:13.880 "max_io_qpairs_per_ctrlr": 127, 00:25:13.880 "in_capsule_data_size": 4096, 00:25:13.880 "max_io_size": 131072, 00:25:13.880 "io_unit_size": 131072, 00:25:13.880 "max_aq_depth": 128, 00:25:13.880 "num_shared_buffers": 511, 00:25:13.880 "buf_cache_size": 4294967295, 00:25:13.880 "dif_insert_or_strip": false, 00:25:13.880 "zcopy": false, 00:25:13.880 "c2h_success": false, 00:25:13.880 "sock_priority": 0, 00:25:13.880 "abort_timeout_sec": 1, 00:25:13.880 "ack_timeout": 0, 00:25:13.880 "data_wr_pool_size": 0 
00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_create_subsystem", 00:25:13.880 "params": { 00:25:13.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.880 "allow_any_host": false, 00:25:13.880 "serial_number": "00000000000000000000", 00:25:13.880 "model_number": "SPDK bdev Controller", 00:25:13.880 "max_namespaces": 32, 00:25:13.880 "min_cntlid": 1, 00:25:13.880 "max_cntlid": 65519, 00:25:13.880 "ana_reporting": false 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_subsystem_add_host", 00:25:13.880 "params": { 00:25:13.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.880 "host": "nqn.2016-06.io.spdk:host1", 00:25:13.880 "psk": "key0" 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_subsystem_add_ns", 00:25:13.880 "params": { 00:25:13.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.880 "namespace": { 00:25:13.880 "nsid": 1, 00:25:13.880 "bdev_name": "malloc0", 00:25:13.880 "nguid": "54044908E665405C94BCFD7C94744443", 00:25:13.880 "uuid": "54044908-e665-405c-94bc-fd7c94744443", 00:25:13.880 "no_auto_visible": false 00:25:13.880 } 00:25:13.880 } 00:25:13.880 }, 00:25:13.880 { 00:25:13.880 "method": "nvmf_subsystem_add_listener", 00:25:13.880 "params": { 00:25:13.880 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.880 "listen_address": { 00:25:13.880 "trtype": "TCP", 00:25:13.880 "adrfam": "IPv4", 00:25:13.880 "traddr": "10.0.0.2", 00:25:13.880 "trsvcid": "4420" 00:25:13.880 }, 00:25:13.880 "secure_channel": true 00:25:13.880 } 00:25:13.880 } 00:25:13.880 ] 00:25:13.880 } 00:25:13.880 ] 00:25:13.880 }' 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=994465 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 994465 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 994465 ']' 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:13.880 00:37:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.141 [2024-07-12 00:37:41.741615] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:14.141 [2024-07-12 00:37:41.741714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.141 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.141 [2024-07-12 00:37:41.806989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.141 [2024-07-12 00:37:41.896170] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.141 [2024-07-12 00:37:41.896233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:14.141 [2024-07-12 00:37:41.896249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.141 [2024-07-12 00:37:41.896262] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.141 [2024-07-12 00:37:41.896274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.141 [2024-07-12 00:37:41.896360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.411 [2024-07-12 00:37:42.124162] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.411 [2024-07-12 00:37:42.156163] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.411 [2024-07-12 00:37:42.164781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=994586 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 994586 /var/tmp/bdevperf.sock 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 994586 ']' 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:14.988 00:37:42 
nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:14.988 00:37:42 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:14.988 "subsystems": [ 00:25:14.988 { 00:25:14.988 "subsystem": "keyring", 00:25:14.988 "config": [ 00:25:14.988 { 00:25:14.988 "method": "keyring_file_add_key", 00:25:14.988 "params": { 00:25:14.988 "name": "key0", 00:25:14.988 "path": "/tmp/tmp.o0WDExQloy" 00:25:14.988 } 00:25:14.988 } 00:25:14.988 ] 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "subsystem": "iobuf", 00:25:14.988 "config": [ 00:25:14.988 { 00:25:14.988 "method": "iobuf_set_options", 00:25:14.988 "params": { 00:25:14.988 "small_pool_count": 8192, 00:25:14.988 "large_pool_count": 1024, 00:25:14.988 "small_bufsize": 8192, 00:25:14.988 "large_bufsize": 135168 00:25:14.988 } 00:25:14.988 } 00:25:14.988 ] 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "subsystem": "sock", 00:25:14.988 "config": [ 00:25:14.988 { 00:25:14.988 "method": "sock_set_default_impl", 00:25:14.988 "params": { 00:25:14.988 "impl_name": "posix" 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "sock_impl_set_options", 00:25:14.988 "params": { 00:25:14.988 "impl_name": "ssl", 00:25:14.988 "recv_buf_size": 4096, 00:25:14.988 "send_buf_size": 4096, 00:25:14.988 "enable_recv_pipe": true, 00:25:14.988 "enable_quickack": false, 00:25:14.988 "enable_placement_id": 0, 00:25:14.988 "enable_zerocopy_send_server": true, 00:25:14.988 "enable_zerocopy_send_client": false, 00:25:14.988 
"zerocopy_threshold": 0, 00:25:14.988 "tls_version": 0, 00:25:14.988 "enable_ktls": false 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "sock_impl_set_options", 00:25:14.988 "params": { 00:25:14.988 "impl_name": "posix", 00:25:14.988 "recv_buf_size": 2097152, 00:25:14.988 "send_buf_size": 2097152, 00:25:14.988 "enable_recv_pipe": true, 00:25:14.988 "enable_quickack": false, 00:25:14.988 "enable_placement_id": 0, 00:25:14.988 "enable_zerocopy_send_server": true, 00:25:14.988 "enable_zerocopy_send_client": false, 00:25:14.988 "zerocopy_threshold": 0, 00:25:14.988 "tls_version": 0, 00:25:14.988 "enable_ktls": false 00:25:14.988 } 00:25:14.988 } 00:25:14.988 ] 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "subsystem": "vmd", 00:25:14.988 "config": [] 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "subsystem": "accel", 00:25:14.988 "config": [ 00:25:14.988 { 00:25:14.988 "method": "accel_set_options", 00:25:14.988 "params": { 00:25:14.988 "small_cache_size": 128, 00:25:14.988 "large_cache_size": 16, 00:25:14.988 "task_count": 2048, 00:25:14.988 "sequence_count": 2048, 00:25:14.988 "buf_count": 2048 00:25:14.988 } 00:25:14.988 } 00:25:14.988 ] 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "subsystem": "bdev", 00:25:14.988 "config": [ 00:25:14.988 { 00:25:14.988 "method": "bdev_set_options", 00:25:14.988 "params": { 00:25:14.988 "bdev_io_pool_size": 65535, 00:25:14.988 "bdev_io_cache_size": 256, 00:25:14.988 "bdev_auto_examine": true, 00:25:14.988 "iobuf_small_cache_size": 128, 00:25:14.988 "iobuf_large_cache_size": 16 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "bdev_raid_set_options", 00:25:14.988 "params": { 00:25:14.988 "process_window_size_kb": 1024 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "bdev_iscsi_set_options", 00:25:14.988 "params": { 00:25:14.988 "timeout_sec": 30 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "bdev_nvme_set_options", 00:25:14.988 "params": { 00:25:14.988 
"action_on_timeout": "none", 00:25:14.988 "timeout_us": 0, 00:25:14.988 "timeout_admin_us": 0, 00:25:14.988 "keep_alive_timeout_ms": 10000, 00:25:14.988 "arbitration_burst": 0, 00:25:14.988 "low_priority_weight": 0, 00:25:14.988 "medium_priority_weight": 0, 00:25:14.988 "high_priority_weight": 0, 00:25:14.988 "nvme_adminq_poll_period_us": 10000, 00:25:14.988 "nvme_ioq_poll_period_us": 0, 00:25:14.988 "io_queue_requests": 512, 00:25:14.988 "delay_cmd_submit": true, 00:25:14.988 "transport_retry_count": 4, 00:25:14.988 "bdev_retry_count": 3, 00:25:14.988 "transport_ack_timeout": 0, 00:25:14.988 "ctrlr_loss_timeout_sec": 0, 00:25:14.988 "reconnect_delay_sec": 0, 00:25:14.988 "fast_io_fail_timeout_sec": 0, 00:25:14.988 "disable_auto_failback": false, 00:25:14.988 "generate_uuids": false, 00:25:14.988 "transport_tos": 0, 00:25:14.988 "nvme_error_stat": false, 00:25:14.988 "rdma_srq_size": 0, 00:25:14.988 "io_path_stat": false, 00:25:14.988 "allow_accel_sequence": false, 00:25:14.988 "rdma_max_cq_size": 0, 00:25:14.988 "rdma_cm_event_timeout_ms": 0, 00:25:14.988 "dhchap_digests": [ 00:25:14.988 "sha256", 00:25:14.988 "sha384", 00:25:14.988 "sha512" 00:25:14.988 ], 00:25:14.988 "dhchap_dhgroups": [ 00:25:14.988 "null", 00:25:14.988 "ffdhe2048", 00:25:14.988 "ffdhe3072", 00:25:14.988 "ffdhe4096", 00:25:14.988 "ffdhe6144", 00:25:14.988 "ffdhe8192" 00:25:14.988 ] 00:25:14.988 } 00:25:14.988 }, 00:25:14.988 { 00:25:14.988 "method": "bdev_nvme_attach_controller", 00:25:14.988 "params": { 00:25:14.988 "name": "nvme0", 00:25:14.988 "trtype": "TCP", 00:25:14.988 "adrfam": "IPv4", 00:25:14.988 "traddr": "10.0.0.2", 00:25:14.988 "trsvcid": "4420", 00:25:14.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.988 "prchk_reftag": false, 00:25:14.988 "prchk_guard": false, 00:25:14.988 "ctrlr_loss_timeout_sec": 0, 00:25:14.988 "reconnect_delay_sec": 0, 00:25:14.988 "fast_io_fail_timeout_sec": 0, 00:25:14.988 "psk": "key0", 00:25:14.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:25:14.988 "hdgst": false, 00:25:14.989 "ddgst": false 00:25:14.989 } 00:25:14.989 }, 00:25:14.989 { 00:25:14.989 "method": "bdev_nvme_set_hotplug", 00:25:14.989 "params": { 00:25:14.989 "period_us": 100000, 00:25:14.989 "enable": false 00:25:14.989 } 00:25:14.989 }, 00:25:14.989 { 00:25:14.989 "method": "bdev_enable_histogram", 00:25:14.989 "params": { 00:25:14.989 "name": "nvme0n1", 00:25:14.989 "enable": true 00:25:14.989 } 00:25:14.989 }, 00:25:14.989 { 00:25:14.989 "method": "bdev_wait_for_examine" 00:25:14.989 } 00:25:14.989 ] 00:25:14.989 }, 00:25:14.989 { 00:25:14.989 "subsystem": "nbd", 00:25:14.989 "config": [] 00:25:14.989 } 00:25:14.989 ] 00:25:14.989 }' 00:25:14.989 00:37:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.247 [2024-07-12 00:37:42.845694] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:15.247 [2024-07-12 00:37:42.845780] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994586 ] 00:25:15.247 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.247 [2024-07-12 00:37:42.904443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.247 [2024-07-12 00:37:42.991661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.505 [2024-07-12 00:37:43.151331] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.505 00:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:15.505 00:37:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:15.505 00:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.505 00:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # 
jq -r '.[].name' 00:25:15.761 00:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.761 00:37:43 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:16.020 Running I/O for 1 seconds... 00:25:16.960 00:25:16.960 Latency(us) 00:25:16.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.960 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:16.960 Verification LBA range: start 0x0 length 0x2000 00:25:16.960 nvme0n1 : 1.02 3195.98 12.48 0.00 0.00 39686.05 7233.23 34952.53 00:25:16.960 =================================================================================================================== 00:25:16.960 Total : 3195.98 12.48 0.00 0.00 39686.05 7233.23 34952.53 00:25:16.960 0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:16.960 nvmf_trace.0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 994586 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 994586 ']' 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 994586 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 994586 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 994586' 00:25:16.960 killing process with pid 994586 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 994586 00:25:16.960 Received shutdown signal, test time was about 1.000000 seconds 00:25:16.960 00:25:16.960 Latency(us) 00:25:16.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.960 =================================================================================================================== 00:25:16.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:16.960 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 994586 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- 
# '[' tcp == tcp ']' 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:17.221 rmmod nvme_tcp 00:25:17.221 rmmod nvme_fabrics 00:25:17.221 rmmod nvme_keyring 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 994465 ']' 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 994465 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 994465 ']' 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 994465 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:17.221 00:37:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 994465 00:25:17.221 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:17.221 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:17.221 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 994465' 00:25:17.221 killing process with pid 994465 00:25:17.221 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 994465 00:25:17.221 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 994465 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:17.482 
00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.482 00:37:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.389 00:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.649 00:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.8Qof0yZk1H /tmp/tmp.dHoet5BWBs /tmp/tmp.o0WDExQloy 00:25:19.649 00:25:19.649 real 1m17.622s 00:25:19.649 user 2m7.236s 00:25:19.649 sys 0m23.610s 00:25:19.649 00:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:19.649 00:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.649 ************************************ 00:25:19.649 END TEST nvmf_tls 00:25:19.649 ************************************ 00:25:19.649 00:37:47 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:19.649 00:37:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:19.649 00:37:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:19.649 00:37:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.649 ************************************ 00:25:19.649 START TEST nvmf_fips 00:25:19.649 ************************************ 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:19.649 * Looking for test storage... 
00:25:19.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:19.649 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:19.650 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:19.908 Error setting digest 00:25:19.908 00422F6E8D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:19.908 00422F6E8D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.908 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:25:19.909 00:37:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:21.816 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:21.816 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:21.816 Found net devices under 0000:08:00.0: cvl_0_0 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:21.816 Found net devices under 0000:08:00.1: cvl_0_1 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.816 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:21.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:25:21.816 00:25:21.816 --- 10.0.0.2 ping statistics --- 00:25:21.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.816 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:25:21.817 00:25:21.817 --- 10.0.0.1 ping statistics --- 00:25:21.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.817 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=996319 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 996319 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 996319 ']' 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.817 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:21.817 [2024-07-12 00:37:49.480134] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:21.817 [2024-07-12 00:37:49.480233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.817 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.817 [2024-07-12 00:37:49.545635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.817 [2024-07-12 00:37:49.634977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.817 [2024-07-12 00:37:49.635039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.817 [2024-07-12 00:37:49.635055] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.817 [2024-07-12 00:37:49.635069] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.817 [2024-07-12 00:37:49.635081] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:21.817 [2024-07-12 00:37:49.635111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:22.075 00:37:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.334 [2024-07-12 00:37:50.047267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.334 [2024-07-12 00:37:50.063211] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:25:22.334 [2024-07-12 00:37:50.063433] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.334 [2024-07-12 00:37:50.093776] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:22.334 malloc0 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=996349 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 996349 /var/tmp/bdevperf.sock 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 996349 ']' 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:22.334 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:22.594 [2024-07-12 00:37:50.199651] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:25:22.594 [2024-07-12 00:37:50.199751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996349 ] 00:25:22.594 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.594 [2024-07-12 00:37:50.261448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.594 [2024-07-12 00:37:50.349017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.852 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.852 00:37:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:25:22.852 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:23.110 [2024-07-12 00:37:50.721763] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:23.110 [2024-07-12 00:37:50.721888] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:23.110 TLSTESTn1 00:25:23.110 00:37:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:23.110 Running I/O for 10 seconds... 
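The fips.sh run above provisions its TLS key as an NVMe PSK interchange string (`NVMeTLSkey-1:01:...:`, written to key.txt and passed to `bdev_nvme_attach_controller --psk`). A minimal sketch of validating that string format follows; the little-endian placement of the CRC32 trailer and the 32/48-byte PSK lengths for hash identifiers 01/02 are assumptions about the interchange format, not something shown in this log:

```python
import base64
import zlib

def validate_psk_interchange(key: str) -> bytes:
    """Parse and check an NVMe TLS PSK interchange string of the form
    "NVMeTLSkey-1:HH:<base64>:" (the shape of the key used by fips.sh above).
    Assumptions: the decoded blob is the configured PSK followed by a 4-byte
    little-endian CRC32 of the PSK, and hash identifier 01 implies a 32-byte
    (SHA-256) PSK while 02 implies 48 bytes (SHA-384).
    Returns the raw PSK bytes, or raises ValueError."""
    parts = key.split(":")
    if len(parts) != 4 or parts[0] != "NVMeTLSkey-1" or parts[3] != "":
        raise ValueError("bad interchange framing")
    hash_id, b64 = parts[1], parts[2]
    blob = base64.b64decode(b64, validate=True)
    if len(blob) < 5:
        raise ValueError("decoded blob too short")
    psk, crc = blob[:-4], blob[-4:]
    if int.from_bytes(crc, "little") != zlib.crc32(psk):
        raise ValueError("CRC32 mismatch")
    expected = {"01": 32, "02": 48}.get(hash_id)
    if expected is not None and len(psk) != expected:
        raise ValueError("PSK length does not match hash identifier")
    return psk
```

The `chmod 0600` in the trace matters too: SPDK refuses overly permissive key files, so any key produced this way should be written with owner-only permissions.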
00:25:35.323 00:25:35.323 Latency(us) 00:25:35.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.323 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.323 Verification LBA range: start 0x0 length 0x2000 00:25:35.323 TLSTESTn1 : 10.02 3124.14 12.20 0.00 0.00 40896.04 7524.50 76118.85 00:25:35.323 =================================================================================================================== 00:25:35.323 Total : 3124.14 12.20 0.00 0.00 40896.04 7524.50 76118.85 00:25:35.323 0 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:25:35.323 00:38:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:35.323 nvmf_trace.0 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 996349 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 996349 ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 
996349 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 996349 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 996349' 00:25:35.323 killing process with pid 996349 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 996349 00:25:35.323 Received shutdown signal, test time was about 10.000000 seconds 00:25:35.323 00:25:35.323 Latency(us) 00:25:35.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.323 =================================================================================================================== 00:25:35.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.323 [2024-07-12 00:38:01.092482] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 996349 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
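The bdevperf `Latency(us)` summary above reports 3124.14 IOPS and 12.20 MiB/s for the 4096-byte (`-o 4096`) verify workload. Those two figures are internally consistent, since MiB/s = IOPS × IO size / 2^20; a quick cross-check:

```python
# Cross-check the bdevperf summary: throughput in MiB/s should equal
# IOPS * io_size / 2**20 for a fixed IO size.
io_size = 4096          # -o 4096 on the bdevperf command line
iops = 3124.14          # reported for TLSTESTn1 in the run above
mib_s = iops * io_size / 2**20
print(round(mib_s, 2))  # 12.2, matching the reported 12.20 MiB/s
```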
00:25:35.323 rmmod nvme_tcp 00:25:35.323 rmmod nvme_fabrics 00:25:35.323 rmmod nvme_keyring 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 996319 ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 996319 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 996319 ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 996319 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 996319 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 996319' 00:25:35.323 killing process with pid 996319 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 996319 00:25:35.323 [2024-07-12 00:38:01.336301] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:35.323 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 996319 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.324 
00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.324 00:38:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:35.893 00:25:35.893 real 0m16.273s 00:25:35.893 user 0m21.752s 00:25:35.893 sys 0m4.982s 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:35.893 ************************************ 00:25:35.893 END TEST nvmf_fips 00:25:35.893 ************************************ 00:25:35.893 00:38:03 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:35.893 00:38:03 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:35.893 00:38:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:35.893 00:38:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:35.893 00:38:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:35.893 ************************************ 00:25:35.893 START TEST nvmf_fuzz 00:25:35.893 ************************************ 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:35.893 * Looking for test storage... 
00:25:35.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:35.893 00:38:03 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local 
-ga net_devs 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:37.837 
00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:25:37.837 Found 0000:08:00.0 (0x8086 - 0x159b) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:25:37.837 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # 
for pci in "${pci_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:37.837 Found net devices under 0000:08:00.0: cvl_0_0 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:37.837 Found net devices under 0000:08:00.1: cvl_0_1 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # 
is_hw=yes 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:37.837 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.837 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:25:37.837 00:25:37.837 --- 10.0.0.2 ping statistics --- 00:25:37.837 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.837 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:25:37.837 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.837 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:25:37.838 00:25:37.838 --- 10.0.0.1 ping statistics --- 00:25:37.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.838 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- 
target/fabrics_fuzz.sh@14 -- # nvmfpid=998836 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 998836 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 998836 ']' 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
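The nvmf_tcp_init sequence traced above assigns 10.0.0.2/24 to cvl_0_0 inside the cvl_0_0_ns_spdk namespace and 10.0.0.1/24 to cvl_0_1 on the initiator side; the two ping checks succeed with no routing setup only because both addresses share one /24. A trivial sanity check of that address plan:

```python
import ipaddress

# Addresses taken from the nvmf_tcp_init steps in the log above.
initiator = ipaddress.ip_interface("10.0.0.1/24")  # cvl_0_1, host side
target = ipaddress.ip_interface("10.0.0.2/24")     # cvl_0_0, inside cvl_0_0_ns_spdk
# Same subnet means the hosts are directly reachable over the link,
# so the flat ping test needs no route entries.
assert initiator.network == target.network
print(initiator.network)  # 10.0.0.0/24
```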
00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:37.838 Malloc0 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.838 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:38.098 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:38.099 00:38:05 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:10.213 Fuzzing completed. Shutting down the fuzz application 00:26:10.213 00:26:10.213 Dumping successful admin opcodes: 00:26:10.213 8, 9, 10, 24, 00:26:10.213 Dumping successful io opcodes: 00:26:10.213 0, 9, 00:26:10.213 NS: 0x200003aeff00 I/O qp, Total commands completed: 470733, total successful commands: 2715, random_seed: 1722213248 00:26:10.213 NS: 0x200003aeff00 admin qp, Total commands completed: 55592, total successful commands: 443, random_seed: 2754550784 00:26:10.213 00:38:36 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:10.213 Fuzzing completed. 
Shutting down the fuzz application 00:26:10.213 00:26:10.213 Dumping successful admin opcodes: 00:26:10.213 24, 00:26:10.213 Dumping successful io opcodes: 00:26:10.213 00:26:10.213 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1849727663 00:26:10.213 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1849859923 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:10.213 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.213 rmmod nvme_tcp 00:26:10.213 rmmod nvme_fabrics 00:26:10.214 rmmod nvme_keyring 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 998836 ']' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # 
killprocess 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 998836 ']' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 998836' 00:26:10.214 killing process with pid 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 998836 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.214 00:38:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.123 00:38:39 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:12.123 00:38:39 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:12.123 00:26:12.123 real 0m36.248s 00:26:12.123 user 0m51.613s 00:26:12.123 sys 0m13.493s 00:26:12.123 00:38:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:12.123 00:38:39 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:12.123 ************************************ 00:26:12.123 END TEST nvmf_fuzz 00:26:12.123 ************************************ 00:26:12.123 00:38:39 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:12.123 00:38:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:12.123 00:38:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:12.123 00:38:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:12.123 ************************************ 00:26:12.123 START TEST nvmf_multiconnection 00:26:12.123 ************************************ 00:26:12.123 00:38:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:12.123 * Looking for test storage... 
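The fuzz section that just ended reports its results in per-queue summary lines of the form `NS: 0x200003aeff00 I/O qp, Total commands completed: 470733, total successful commands: 2715, random_seed: ...`. A small awk filter can pull the counters out of that format (field layout assumed from this log, not from the fuzzer's source):

```shell
# Extract "completed" and "successful" counters from nvme_fuzz summary
# lines on stdin.  Splitting on ':' or ',' plus optional spaces makes the
# queue name field 2, the completed count field 4, the success count field 6.
parse_fuzz_summary() {
    awk -F'[:,] *' '/Total commands completed/ {
        printf "%s completed=%s successful=%s\n", $2, $4, $6
    }'
}
```

Feeding the I/O-qp line from the run above yields `0x200003aeff00 I/O qp completed=470733 successful=2715`.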
00:26:12.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:12.383 
00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:12.383 00:38:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@291 -- # pci_devs=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.286 
00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:14.286 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.286 
00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:14.286 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:26:14.286 Found net devices under 
0000:08:00.0: cvl_0_0 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:14.286 Found net devices under 0000:08:00.1: cvl_0_1 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:14.286 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.287 
00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:14.287 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:26:14.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:26:14.287 00:26:14.287 --- 10.0.0.2 ping statistics --- 00:26:14.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.287 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:26:14.287 00:26:14.287 --- 10.0.0.1 ping statistics --- 00:26:14.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.287 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:14.287 
00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1003197 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1003197 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 1003197 ']' 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:14.287 00:38:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.287 [2024-07-12 00:38:41.833594] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:26:14.287 [2024-07-12 00:38:41.833727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.287 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.287 [2024-07-12 00:38:41.902385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:14.287 [2024-07-12 00:38:41.994766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
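The `-m 0xF` mask passed to `nvmf_tgt` above selects four reactor cores, which is why four "Reactor started" notices follow. An illustrative helper (not part of the test scripts) that counts the cores such a hex mask selects by counting its set bits:

```shell
# Count set bits in a cpumask like 0xF; each set bit is one reactor core.
cores_in_mask() {
    local mask=$(( $1 )) count=0
    while (( mask > 0 )); do
        count=$(( count + (mask & 1) ))
        mask=$(( mask >> 1 ))
    done
    echo "$count"
}
```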
00:26:14.287 [2024-07-12 00:38:41.994825] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.287 [2024-07-12 00:38:41.994841] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.287 [2024-07-12 00:38:41.994854] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.287 [2024-07-12 00:38:41.994866] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:14.287 [2024-07-12 00:38:41.994947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.287 [2024-07-12 00:38:41.995001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.287 [2024-07-12 00:38:41.995053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.287 [2024-07-12 00:38:41.995057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.287 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:14.287 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:26:14.287 00:38:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.287 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.287 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.545 
[2024-07-12 00:38:42.135089] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 Malloc1
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 [2024-07-12 00:38:42.187802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 Malloc2
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:26:14.545 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 Malloc3
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 Malloc4
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 Malloc5
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.546 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc6
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc7
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc8
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc9
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc10
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 Malloc11
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:26:14.805 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:14.806 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:26:15.065 00:38:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:15.065 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:26:15.065 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:15.065 00:38:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:26:15.325 00:38:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:26:15.325 00:38:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:15.325 00:38:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:15.325 00:38:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:15.325 00:38:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:17.861 00:38:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:20.395 00:38:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:26:20.395 00:38:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:26:20.395 00:38:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:20.395 00:38:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:20.395 00:38:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:20.395 00:38:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:26:22.927 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:22.928 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:22.928 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:22.928 00:38:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:24.832 00:38:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:26:25.401 00:38:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:26:25.402 00:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:25.402 00:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:25.402 00:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:25.402 00:38:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:27.936 00:38:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:26:28.194 00:38:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:26:28.194 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:28.194 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:28.194 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:28.194 00:38:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:30.127 00:38:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:26:30.693 00:38:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:26:30.693 00:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:30.693 00:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:30.693 00:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:30.693 00:38:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:33.224 00:39:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:26:33.481 00:39:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:26:33.481 00:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:33.481 00:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:33.481 00:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:33.481 00:39:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:35.381 00:39:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:26:35.945 00:39:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:26:35.945 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:35.945 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:35.945 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:26:35.945 00:39:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:38.472 00:39:05 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:26:38.729 00:39:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:26:38.729 00:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0
00:26:38.729 00:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:26:38.729 00:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- #
[[ -n '' ]] 00:26:38.729 00:39:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.628 00:39:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:41.560 00:39:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:41.560 00:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:26:41.560 00:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:26:41.560 00:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:26:41.560 00:39:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:26:43.458 00:39:11 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:26:43.458 00:39:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:43.458 [global] 00:26:43.458 thread=1 00:26:43.458 invalidate=1 00:26:43.458 rw=read 00:26:43.458 time_based=1 00:26:43.458 runtime=10 00:26:43.458 ioengine=libaio 00:26:43.458 direct=1 00:26:43.458 bs=262144 00:26:43.458 iodepth=64 00:26:43.458 norandommap=1 00:26:43.458 numjobs=1 00:26:43.458 00:26:43.458 [job0] 00:26:43.458 filename=/dev/nvme0n1 00:26:43.458 [job1] 00:26:43.458 filename=/dev/nvme10n1 00:26:43.458 [job2] 00:26:43.458 filename=/dev/nvme1n1 00:26:43.458 [job3] 00:26:43.458 filename=/dev/nvme2n1 00:26:43.458 [job4] 00:26:43.458 filename=/dev/nvme3n1 00:26:43.458 [job5] 00:26:43.458 filename=/dev/nvme4n1 00:26:43.458 [job6] 00:26:43.458 filename=/dev/nvme5n1 00:26:43.458 [job7] 00:26:43.458 filename=/dev/nvme6n1 00:26:43.458 [job8] 00:26:43.458 filename=/dev/nvme7n1 00:26:43.458 [job9] 00:26:43.458 filename=/dev/nvme8n1 00:26:43.458 [job10] 00:26:43.458 filename=/dev/nvme9n1 00:26:43.714 Could not set queue depth (nvme0n1) 00:26:43.714 Could not set queue depth (nvme10n1) 00:26:43.714 Could not set queue depth (nvme1n1) 00:26:43.714 Could not set queue depth (nvme2n1) 00:26:43.714 Could not set queue depth (nvme3n1) 00:26:43.714 Could not set queue depth (nvme4n1) 00:26:43.714 Could not set queue depth (nvme5n1) 00:26:43.714 Could not set queue depth (nvme6n1) 00:26:43.714 Could not set queue depth (nvme7n1) 00:26:43.714 Could not set 
queue depth (nvme8n1) 00:26:43.714 Could not set queue depth (nvme9n1) 00:26:43.714 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:43.714 fio-3.35 00:26:43.714 Starting 11 threads 00:26:55.905 00:26:55.905 job0: (groupid=0, jobs=1): err= 0: pid=1007076: Fri Jul 12 00:39:21 2024 00:26:55.905 read: IOPS=755, BW=189MiB/s (198MB/s)(1911MiB/10122msec) 00:26:55.905 slat (usec): min=10, max=98729, avg=1109.82, stdev=4862.66 00:26:55.905 clat (usec): min=1798, max=263956, avg=83548.55, stdev=50088.43 00:26:55.905 lat (usec): min=1819, max=265528, avg=84658.37, stdev=50821.62 00:26:55.905 clat percentiles (msec): 00:26:55.905 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 36], 00:26:55.905 | 
30.00th=[ 51], 40.00th=[ 65], 50.00th=[ 75], 60.00th=[ 89], 00:26:55.905 | 70.00th=[ 112], 80.00th=[ 132], 90.00th=[ 153], 95.00th=[ 174], 00:26:55.905 | 99.00th=[ 197], 99.50th=[ 241], 99.90th=[ 264], 99.95th=[ 264], 00:26:55.905 | 99.99th=[ 264] 00:26:55.905 bw ( KiB/s): min=94208, max=415744, per=10.59%, avg=194075.00, stdev=90154.95, samples=20 00:26:55.905 iops : min= 368, max= 1624, avg=758.10, stdev=352.16, samples=20 00:26:55.905 lat (msec) : 2=0.20%, 4=2.16%, 10=1.58%, 20=3.22%, 50=22.80% 00:26:55.905 lat (msec) : 100=35.28%, 250=34.44%, 500=0.33% 00:26:55.905 cpu : usr=0.44%, sys=2.19%, ctx=1243, majf=0, minf=4097 00:26:55.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:55.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.905 issued rwts: total=7645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.905 job1: (groupid=0, jobs=1): err= 0: pid=1007077: Fri Jul 12 00:39:21 2024 00:26:55.905 read: IOPS=595, BW=149MiB/s (156MB/s)(1491MiB/10016msec) 00:26:55.905 slat (usec): min=10, max=116957, avg=1112.60, stdev=4953.31 00:26:55.905 clat (usec): min=756, max=284117, avg=106302.27, stdev=53700.90 00:26:55.905 lat (usec): min=772, max=291420, avg=107414.86, stdev=54368.49 00:26:55.905 clat percentiles (msec): 00:26:55.905 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 23], 20.00th=[ 58], 00:26:55.905 | 30.00th=[ 80], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 130], 00:26:55.905 | 70.00th=[ 144], 80.00th=[ 155], 90.00th=[ 171], 95.00th=[ 184], 00:26:55.905 | 99.00th=[ 205], 99.50th=[ 222], 99.90th=[ 284], 99.95th=[ 284], 00:26:55.905 | 99.99th=[ 284] 00:26:55.905 bw ( KiB/s): min=91136, max=297472, per=8.24%, avg=151027.10, stdev=49110.66, samples=20 00:26:55.905 iops : min= 356, max= 1162, avg=589.90, stdev=191.86, samples=20 00:26:55.905 lat (usec) : 
1000=0.05% 00:26:55.905 lat (msec) : 2=0.02%, 4=0.13%, 10=3.14%, 20=5.65%, 50=9.86% 00:26:55.905 lat (msec) : 100=23.83%, 250=57.15%, 500=0.17% 00:26:55.905 cpu : usr=0.36%, sys=1.80%, ctx=1152, majf=0, minf=3972 00:26:55.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:55.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.905 issued rwts: total=5963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.905 job2: (groupid=0, jobs=1): err= 0: pid=1007078: Fri Jul 12 00:39:21 2024 00:26:55.905 read: IOPS=589, BW=147MiB/s (155MB/s)(1492MiB/10118msec) 00:26:55.905 slat (usec): min=10, max=68334, avg=1192.39, stdev=4704.52 00:26:55.905 clat (usec): min=1360, max=308839, avg=107216.69, stdev=50982.15 00:26:55.905 lat (usec): min=1380, max=323077, avg=108409.08, stdev=51821.51 00:26:55.905 clat percentiles (msec): 00:26:55.905 | 1.00th=[ 6], 5.00th=[ 40], 10.00th=[ 53], 20.00th=[ 68], 00:26:55.905 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 108], 00:26:55.905 | 70.00th=[ 136], 80.00th=[ 157], 90.00th=[ 178], 95.00th=[ 197], 00:26:55.905 | 99.00th=[ 259], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 305], 00:26:55.905 | 99.99th=[ 309] 00:26:55.905 bw ( KiB/s): min=72704, max=249856, per=8.25%, avg=151130.25, stdev=55628.75, samples=20 00:26:55.905 iops : min= 284, max= 976, avg=590.30, stdev=217.32, samples=20 00:26:55.905 lat (msec) : 2=0.13%, 4=0.20%, 10=1.26%, 20=0.84%, 50=6.49% 00:26:55.905 lat (msec) : 100=46.98%, 250=42.90%, 500=1.21% 00:26:55.905 cpu : usr=0.42%, sys=1.86%, ctx=1104, majf=0, minf=4097 00:26:55.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:55.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:26:55.905 issued rwts: total=5967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.905 job3: (groupid=0, jobs=1): err= 0: pid=1007079: Fri Jul 12 00:39:21 2024 00:26:55.905 read: IOPS=591, BW=148MiB/s (155MB/s)(1494MiB/10112msec) 00:26:55.905 slat (usec): min=10, max=74553, avg=1231.05, stdev=4711.31 00:26:55.905 clat (msec): min=2, max=292, avg=106.95, stdev=51.52 00:26:55.905 lat (msec): min=2, max=308, avg=108.19, stdev=52.14 00:26:55.905 clat percentiles (msec): 00:26:55.905 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 36], 20.00th=[ 62], 00:26:55.905 | 30.00th=[ 78], 40.00th=[ 93], 50.00th=[ 113], 60.00th=[ 128], 00:26:55.905 | 70.00th=[ 138], 80.00th=[ 150], 90.00th=[ 167], 95.00th=[ 184], 00:26:55.905 | 99.00th=[ 226], 99.50th=[ 284], 99.90th=[ 292], 99.95th=[ 292], 00:26:55.905 | 99.99th=[ 292] 00:26:55.905 bw ( KiB/s): min=86528, max=262656, per=8.26%, avg=151412.30, stdev=49202.45, samples=20 00:26:55.905 iops : min= 338, max= 1026, avg=591.45, stdev=192.20, samples=20 00:26:55.905 lat (msec) : 4=0.27%, 10=2.04%, 20=3.51%, 50=10.77%, 100=27.47% 00:26:55.905 lat (msec) : 250=55.14%, 500=0.79% 00:26:55.905 cpu : usr=0.39%, sys=1.88%, ctx=1084, majf=0, minf=4097 00:26:55.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:55.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.905 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.905 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.905 job4: (groupid=0, jobs=1): err= 0: pid=1007080: Fri Jul 12 00:39:21 2024 00:26:55.905 read: IOPS=450, BW=113MiB/s (118MB/s)(1135MiB/10080msec) 00:26:55.905 slat (usec): min=12, max=141572, avg=1902.33, stdev=6736.89 00:26:55.905 clat (msec): min=24, max=310, avg=140.13, stdev=41.27 00:26:55.905 lat (msec): 
min=25, max=320, avg=142.03, stdev=42.24 00:26:55.905 clat percentiles (msec): 00:26:55.905 | 1.00th=[ 36], 5.00th=[ 66], 10.00th=[ 92], 20.00th=[ 110], 00:26:55.905 | 30.00th=[ 125], 40.00th=[ 133], 50.00th=[ 142], 60.00th=[ 148], 00:26:55.905 | 70.00th=[ 157], 80.00th=[ 171], 90.00th=[ 188], 95.00th=[ 205], 00:26:55.905 | 99.00th=[ 262], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 313], 00:26:55.905 | 99.99th=[ 313] 00:26:55.905 bw ( KiB/s): min=66560, max=151040, per=6.25%, avg=114585.60, stdev=25472.27, samples=20 00:26:55.905 iops : min= 260, max= 590, avg=447.60, stdev=99.50, samples=20 00:26:55.905 lat (msec) : 50=3.04%, 100=10.97%, 250=84.47%, 500=1.52% 00:26:55.905 cpu : usr=0.27%, sys=1.55%, ctx=845, majf=0, minf=4097 00:26:55.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:55.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=4539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job5: (groupid=0, jobs=1): err= 0: pid=1007087: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=616, BW=154MiB/s (162MB/s)(1554MiB/10083msec) 00:26:55.906 slat (usec): min=14, max=107028, avg=1467.53, stdev=5295.85 00:26:55.906 clat (usec): min=900, max=298829, avg=102271.10, stdev=49098.27 00:26:55.906 lat (usec): min=922, max=333558, avg=103738.63, stdev=49849.85 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 49], 20.00th=[ 59], 00:26:55.906 | 30.00th=[ 70], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 107], 00:26:55.906 | 70.00th=[ 127], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 190], 00:26:55.906 | 99.00th=[ 257], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 300], 00:26:55.906 | 99.99th=[ 300] 00:26:55.906 bw ( KiB/s): min=68096, max=280576, per=8.59%, avg=157477.75, stdev=61541.41, samples=20 
00:26:55.906 iops : min= 266, max= 1096, avg=615.10, stdev=240.41, samples=20 00:26:55.906 lat (usec) : 1000=0.02% 00:26:55.906 lat (msec) : 2=0.05%, 4=0.26%, 10=0.06%, 20=1.37%, 50=10.07% 00:26:55.906 lat (msec) : 100=42.30%, 250=44.70%, 500=1.17% 00:26:55.906 cpu : usr=0.41%, sys=2.07%, ctx=984, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:55.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=6215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job6: (groupid=0, jobs=1): err= 0: pid=1007088: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=597, BW=149MiB/s (157MB/s)(1507MiB/10082msec) 00:26:55.906 slat (usec): min=10, max=100663, avg=763.69, stdev=4036.93 00:26:55.906 clat (usec): min=632, max=237376, avg=106212.03, stdev=46266.49 00:26:55.906 lat (usec): min=649, max=271964, avg=106975.73, stdev=46687.34 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 3], 5.00th=[ 20], 10.00th=[ 43], 20.00th=[ 71], 00:26:55.906 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 118], 00:26:55.906 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 163], 95.00th=[ 184], 00:26:55.906 | 99.00th=[ 207], 99.50th=[ 215], 99.90th=[ 224], 99.95th=[ 226], 00:26:55.906 | 99.99th=[ 239] 00:26:55.906 bw ( KiB/s): min=91648, max=218624, per=8.33%, avg=152609.10, stdev=35471.86, samples=20 00:26:55.906 iops : min= 358, max= 854, avg=596.10, stdev=138.54, samples=20 00:26:55.906 lat (usec) : 750=0.02%, 1000=0.02% 00:26:55.906 lat (msec) : 2=0.65%, 4=1.43%, 10=1.11%, 20=2.06%, 50=5.92% 00:26:55.906 lat (msec) : 100=33.39%, 250=55.41% 00:26:55.906 cpu : usr=0.37%, sys=1.73%, ctx=1244, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:55.906 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=6026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job7: (groupid=0, jobs=1): err= 0: pid=1007089: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=634, BW=159MiB/s (166MB/s)(1599MiB/10083msec) 00:26:55.906 slat (usec): min=10, max=101616, avg=785.62, stdev=4439.19 00:26:55.906 clat (usec): min=1183, max=272575, avg=100039.40, stdev=48821.40 00:26:55.906 lat (usec): min=1209, max=301050, avg=100825.02, stdev=49437.18 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 39], 20.00th=[ 57], 00:26:55.906 | 30.00th=[ 69], 40.00th=[ 85], 50.00th=[ 100], 60.00th=[ 115], 00:26:55.906 | 70.00th=[ 130], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 180], 00:26:55.906 | 99.00th=[ 199], 99.50th=[ 209], 99.90th=[ 234], 99.95th=[ 249], 00:26:55.906 | 99.99th=[ 271] 00:26:55.906 bw ( KiB/s): min=97792, max=250880, per=8.84%, avg=162053.80, stdev=45912.10, samples=20 00:26:55.906 iops : min= 382, max= 980, avg=633.00, stdev=179.33, samples=20 00:26:55.906 lat (msec) : 2=0.17%, 4=0.81%, 10=2.69%, 20=1.45%, 50=11.50% 00:26:55.906 lat (msec) : 100=34.02%, 250=49.33%, 500=0.03% 00:26:55.906 cpu : usr=0.39%, sys=1.87%, ctx=1185, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:55.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=6394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job8: (groupid=0, jobs=1): err= 0: pid=1007090: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=834, BW=209MiB/s (219MB/s)(2112MiB/10119msec) 
00:26:55.906 slat (usec): min=10, max=145105, avg=970.52, stdev=4385.72 00:26:55.906 clat (usec): min=878, max=386834, avg=75615.63, stdev=43125.10 00:26:55.906 lat (usec): min=902, max=386881, avg=76586.15, stdev=43691.44 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 30], 20.00th=[ 34], 00:26:55.906 | 30.00th=[ 46], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 80], 00:26:55.906 | 70.00th=[ 95], 80.00th=[ 113], 90.00th=[ 136], 95.00th=[ 155], 00:26:55.906 | 99.00th=[ 197], 99.50th=[ 222], 99.90th=[ 243], 99.95th=[ 243], 00:26:55.906 | 99.99th=[ 388] 00:26:55.906 bw ( KiB/s): min=121344, max=439296, per=11.71%, avg=214637.80, stdev=81329.00, samples=20 00:26:55.906 iops : min= 474, max= 1716, avg=838.35, stdev=317.74, samples=20 00:26:55.906 lat (usec) : 1000=0.01% 00:26:55.906 lat (msec) : 2=0.30%, 4=0.37%, 10=0.56%, 20=2.33%, 50=28.78% 00:26:55.906 lat (msec) : 100=39.91%, 250=27.72%, 500=0.02% 00:26:55.906 cpu : usr=0.48%, sys=2.44%, ctx=1258, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:55.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=8448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job9: (groupid=0, jobs=1): err= 0: pid=1007091: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=831, BW=208MiB/s (218MB/s)(2105MiB/10120msec) 00:26:55.906 slat (usec): min=10, max=91775, avg=935.66, stdev=3901.71 00:26:55.906 clat (usec): min=1920, max=300123, avg=75933.48, stdev=47077.21 00:26:55.906 lat (usec): min=1942, max=300166, avg=76869.14, stdev=47695.05 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 28], 20.00th=[ 35], 00:26:55.906 | 30.00th=[ 42], 40.00th=[ 52], 50.00th=[ 63], 60.00th=[ 79], 00:26:55.906 | 
70.00th=[ 97], 80.00th=[ 123], 90.00th=[ 148], 95.00th=[ 165], 00:26:55.906 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 266], 99.95th=[ 268], 00:26:55.906 | 99.99th=[ 300] 00:26:55.906 bw ( KiB/s): min=95744, max=453632, per=11.67%, avg=213927.50, stdev=100391.38, samples=20 00:26:55.906 iops : min= 374, max= 1772, avg=835.65, stdev=392.15, samples=20 00:26:55.906 lat (msec) : 2=0.01%, 4=0.91%, 10=1.94%, 20=2.38%, 50=33.32% 00:26:55.906 lat (msec) : 100=32.95%, 250=28.35%, 500=0.14% 00:26:55.906 cpu : usr=0.41%, sys=2.60%, ctx=1149, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:55.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=8419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 job10: (groupid=0, jobs=1): err= 0: pid=1007092: Fri Jul 12 00:39:21 2024 00:26:55.906 read: IOPS=679, BW=170MiB/s (178MB/s)(1720MiB/10124msec) 00:26:55.906 slat (usec): min=15, max=166162, avg=1240.90, stdev=5718.11 00:26:55.906 clat (msec): min=2, max=368, avg=92.82, stdev=56.24 00:26:55.906 lat (msec): min=2, max=368, avg=94.06, stdev=57.15 00:26:55.906 clat percentiles (msec): 00:26:55.906 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 32], 20.00th=[ 43], 00:26:55.906 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 92], 00:26:55.906 | 70.00th=[ 118], 80.00th=[ 150], 90.00th=[ 174], 95.00th=[ 186], 00:26:55.906 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 330], 00:26:55.906 | 99.99th=[ 368] 00:26:55.906 bw ( KiB/s): min=68096, max=407040, per=9.52%, avg=174514.55, stdev=84590.07, samples=20 00:26:55.906 iops : min= 266, max= 1590, avg=681.65, stdev=330.38, samples=20 00:26:55.906 lat (msec) : 4=0.44%, 10=0.74%, 20=1.32%, 50=22.15%, 100=39.50% 00:26:55.906 lat (msec) : 250=34.33%, 500=1.53% 
00:26:55.906 cpu : usr=0.46%, sys=2.35%, ctx=1105, majf=0, minf=4097 00:26:55.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:55.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:55.906 issued rwts: total=6881,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.906 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:55.906 00:26:55.906 Run status group 0 (all jobs): 00:26:55.906 READ: bw=1790MiB/s (1877MB/s), 113MiB/s-209MiB/s (118MB/s-219MB/s), io=17.7GiB (19.0GB), run=10016-10124msec 00:26:55.906 00:26:55.906 Disk stats (read/write): 00:26:55.906 nvme0n1: ios=15122/0, merge=0/0, ticks=1234873/0, in_queue=1234873, util=97.18% 00:26:55.906 nvme10n1: ios=11556/0, merge=0/0, ticks=1244851/0, in_queue=1244851, util=97.39% 00:26:55.906 nvme1n1: ios=11739/0, merge=0/0, ticks=1237728/0, in_queue=1237728, util=97.65% 00:26:55.906 nvme2n1: ios=11774/0, merge=0/0, ticks=1234634/0, in_queue=1234634, util=97.79% 00:26:55.906 nvme3n1: ios=8877/0, merge=0/0, ticks=1236425/0, in_queue=1236425, util=97.87% 00:26:55.906 nvme4n1: ios=12242/0, merge=0/0, ticks=1239142/0, in_queue=1239142, util=98.21% 00:26:55.906 nvme5n1: ios=11857/0, merge=0/0, ticks=1246951/0, in_queue=1246951, util=98.38% 00:26:55.906 nvme6n1: ios=12586/0, merge=0/0, ticks=1243772/0, in_queue=1243772, util=98.49% 00:26:55.906 nvme7n1: ios=16730/0, merge=0/0, ticks=1233232/0, in_queue=1233232, util=98.87% 00:26:55.906 nvme8n1: ios=16635/0, merge=0/0, ticks=1231598/0, in_queue=1231598, util=99.05% 00:26:55.906 nvme9n1: ios=13586/0, merge=0/0, ticks=1232223/0, in_queue=1232223, util=99.20% 00:26:55.906 00:39:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:55.906 [global] 00:26:55.906 thread=1 00:26:55.906 invalidate=1 
00:26:55.906 rw=randwrite 00:26:55.906 time_based=1 00:26:55.906 runtime=10 00:26:55.906 ioengine=libaio 00:26:55.906 direct=1 00:26:55.906 bs=262144 00:26:55.906 iodepth=64 00:26:55.906 norandommap=1 00:26:55.906 numjobs=1 00:26:55.906 00:26:55.906 [job0] 00:26:55.906 filename=/dev/nvme0n1 00:26:55.906 [job1] 00:26:55.906 filename=/dev/nvme10n1 00:26:55.906 [job2] 00:26:55.906 filename=/dev/nvme1n1 00:26:55.906 [job3] 00:26:55.906 filename=/dev/nvme2n1 00:26:55.906 [job4] 00:26:55.906 filename=/dev/nvme3n1 00:26:55.906 [job5] 00:26:55.906 filename=/dev/nvme4n1 00:26:55.907 [job6] 00:26:55.907 filename=/dev/nvme5n1 00:26:55.907 [job7] 00:26:55.907 filename=/dev/nvme6n1 00:26:55.907 [job8] 00:26:55.907 filename=/dev/nvme7n1 00:26:55.907 [job9] 00:26:55.907 filename=/dev/nvme8n1 00:26:55.907 [job10] 00:26:55.907 filename=/dev/nvme9n1 00:26:55.907 Could not set queue depth (nvme0n1) 00:26:55.907 Could not set queue depth (nvme10n1) 00:26:55.907 Could not set queue depth (nvme1n1) 00:26:55.907 Could not set queue depth (nvme2n1) 00:26:55.907 Could not set queue depth (nvme3n1) 00:26:55.907 Could not set queue depth (nvme4n1) 00:26:55.907 Could not set queue depth (nvme5n1) 00:26:55.907 Could not set queue depth (nvme6n1) 00:26:55.907 Could not set queue depth (nvme7n1) 00:26:55.907 Could not set queue depth (nvme8n1) 00:26:55.907 Could not set queue depth (nvme9n1) 00:26:55.907 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:55.907 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:55.907 fio-3.35 00:26:55.907 Starting 11 threads 00:27:05.885 00:27:05.885 job0: (groupid=0, jobs=1): err= 0: pid=1007939: Fri Jul 12 00:39:32 2024 00:27:05.885 write: IOPS=476, BW=119MiB/s (125MB/s)(1210MiB/10160msec); 0 zone resets 00:27:05.885 slat (usec): min=17, max=143817, avg=1076.24, stdev=5676.02 00:27:05.885 clat (usec): min=766, max=418157, avg=133245.26, stdev=98572.54 00:27:05.885 lat (usec): min=823, max=420973, avg=134321.50, stdev=99490.46 00:27:05.885 clat percentiles (usec): 00:27:05.885 | 1.00th=[ 1893], 5.00th=[ 7701], 10.00th=[ 13566], 20.00th=[ 25035], 00:27:05.885 | 30.00th=[ 47973], 40.00th=[ 84411], 50.00th=[139461], 60.00th=[173016], 00:27:05.885 | 70.00th=[193987], 80.00th=[217056], 90.00th=[261096], 95.00th=[299893], 00:27:05.885 | 99.00th=[392168], 99.50th=[400557], 99.90th=[413139], 99.95th=[417334], 00:27:05.885 | 99.99th=[417334] 00:27:05.885 bw ( KiB/s): min=63488, max=242203, per=8.37%, avg=122215.75, stdev=41172.49, samples=20 00:27:05.885 iops : min= 248, max= 946, avg=477.40, stdev=160.81, samples=20 00:27:05.885 lat (usec) : 1000=0.31% 00:27:05.885 lat (msec) : 2=0.85%, 4=1.72%, 10=4.28%, 20=8.74%, 50=14.90% 00:27:05.885 lat (msec) : 
100=12.15%, 250=45.69%, 500=11.37% 00:27:05.885 cpu : usr=1.36%, sys=1.74%, ctx=3506, majf=0, minf=1 00:27:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.885 issued rwts: total=0,4839,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.885 job1: (groupid=0, jobs=1): err= 0: pid=1007951: Fri Jul 12 00:39:32 2024 00:27:05.885 write: IOPS=488, BW=122MiB/s (128MB/s)(1238MiB/10134msec); 0 zone resets 00:27:05.885 slat (usec): min=18, max=85719, avg=1009.27, stdev=3752.23 00:27:05.885 clat (usec): min=960, max=344206, avg=129911.96, stdev=82052.27 00:27:05.885 lat (usec): min=1023, max=348268, avg=130921.24, stdev=82892.38 00:27:05.885 clat percentiles (msec): 00:27:05.885 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 41], 00:27:05.885 | 30.00th=[ 72], 40.00th=[ 106], 50.00th=[ 126], 60.00th=[ 153], 00:27:05.885 | 70.00th=[ 182], 80.00th=[ 209], 90.00th=[ 236], 95.00th=[ 271], 00:27:05.885 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 342], 00:27:05.885 | 99.99th=[ 347] 00:27:05.885 bw ( KiB/s): min=58368, max=225280, per=8.57%, avg=125146.50, stdev=46032.78, samples=20 00:27:05.885 iops : min= 228, max= 880, avg=488.85, stdev=179.82, samples=20 00:27:05.885 lat (usec) : 1000=0.04% 00:27:05.885 lat (msec) : 2=0.26%, 4=0.50%, 10=2.77%, 20=6.38%, 50=13.47% 00:27:05.885 lat (msec) : 100=14.72%, 250=54.38%, 500=7.47% 00:27:05.885 cpu : usr=1.50%, sys=1.68%, ctx=3562, majf=0, minf=1 00:27:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:27:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.885 issued rwts: total=0,4952,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.885 job2: (groupid=0, jobs=1): err= 0: pid=1007952: Fri Jul 12 00:39:32 2024 00:27:05.885 write: IOPS=541, BW=135MiB/s (142MB/s)(1374MiB/10145msec); 0 zone resets 00:27:05.885 slat (usec): min=15, max=155448, avg=1248.96, stdev=5209.97 00:27:05.885 clat (usec): min=742, max=533060, avg=116841.40, stdev=91745.71 00:27:05.885 lat (usec): min=778, max=533102, avg=118090.37, stdev=92823.50 00:27:05.885 clat percentiles (msec): 00:27:05.885 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 9], 20.00th=[ 22], 00:27:05.885 | 30.00th=[ 45], 40.00th=[ 65], 50.00th=[ 104], 60.00th=[ 146], 00:27:05.885 | 70.00th=[ 176], 80.00th=[ 207], 90.00th=[ 239], 95.00th=[ 255], 00:27:05.885 | 99.00th=[ 376], 99.50th=[ 397], 99.90th=[ 418], 99.95th=[ 435], 00:27:05.885 | 99.99th=[ 535] 00:27:05.885 bw ( KiB/s): min=61440, max=257536, per=9.51%, avg=138998.55, stdev=62625.89, samples=20 00:27:05.885 iops : min= 240, max= 1006, avg=542.95, stdev=244.64, samples=20 00:27:05.885 lat (usec) : 750=0.02%, 1000=0.18% 00:27:05.885 lat (msec) : 2=0.69%, 4=2.91%, 10=7.14%, 20=8.08%, 50=16.40% 00:27:05.885 lat (msec) : 100=13.96%, 250=44.39%, 500=6.21%, 750=0.02% 00:27:05.885 cpu : usr=1.46%, sys=1.91%, ctx=3521, majf=0, minf=1 00:27:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.885 issued rwts: total=0,5494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.885 job3: (groupid=0, jobs=1): err= 0: pid=1007955: Fri Jul 12 00:39:32 2024 00:27:05.885 write: IOPS=623, BW=156MiB/s (164MB/s)(1580MiB/10127msec); 0 zone resets 00:27:05.885 slat (usec): min=17, max=77238, avg=893.91, stdev=2973.07 00:27:05.885 clat (usec): min=755, 
max=317353, avg=101615.74, stdev=63688.54 00:27:05.885 lat (usec): min=794, max=317415, avg=102509.65, stdev=64285.81 00:27:05.885 clat percentiles (msec): 00:27:05.885 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 18], 20.00th=[ 41], 00:27:05.885 | 30.00th=[ 56], 40.00th=[ 82], 50.00th=[ 102], 60.00th=[ 115], 00:27:05.885 | 70.00th=[ 136], 80.00th=[ 157], 90.00th=[ 184], 95.00th=[ 224], 00:27:05.885 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 296], 99.95th=[ 309], 00:27:05.885 | 99.99th=[ 317] 00:27:05.885 bw ( KiB/s): min=96768, max=293376, per=10.96%, avg=160112.95, stdev=51678.79, samples=20 00:27:05.885 iops : min= 378, max= 1146, avg=625.40, stdev=201.88, samples=20 00:27:05.885 lat (usec) : 1000=0.06% 00:27:05.885 lat (msec) : 2=0.36%, 4=0.90%, 10=4.08%, 20=5.82%, 50=13.33% 00:27:05.885 lat (msec) : 100=24.63%, 250=49.78%, 500=1.03% 00:27:05.885 cpu : usr=1.85%, sys=2.10%, ctx=4159, majf=0, minf=1 00:27:05.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:27:05.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.885 issued rwts: total=0,6318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.885 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.885 job4: (groupid=0, jobs=1): err= 0: pid=1007956: Fri Jul 12 00:39:32 2024 00:27:05.885 write: IOPS=430, BW=108MiB/s (113MB/s)(1091MiB/10129msec); 0 zone resets 00:27:05.885 slat (usec): min=16, max=111903, avg=1522.90, stdev=4946.03 00:27:05.885 clat (usec): min=724, max=396194, avg=146979.03, stdev=92672.79 00:27:05.885 lat (usec): min=757, max=396254, avg=148501.93, stdev=93878.37 00:27:05.885 clat percentiles (usec): 00:27:05.885 | 1.00th=[ 1680], 5.00th=[ 10159], 10.00th=[ 19268], 20.00th=[ 38011], 00:27:05.885 | 30.00th=[ 78119], 40.00th=[133694], 50.00th=[160433], 60.00th=[179307], 00:27:05.885 | 70.00th=[202376], 80.00th=[229639], 90.00th=[267387], 
95.00th=[291505], 00:27:05.886 | 99.00th=[354419], 99.50th=[375391], 99.90th=[396362], 99.95th=[396362], 00:27:05.886 | 99.99th=[396362] 00:27:05.886 bw ( KiB/s): min=71168, max=173568, per=7.53%, avg=110037.05, stdev=37405.93, samples=20 00:27:05.886 iops : min= 278, max= 678, avg=429.80, stdev=146.06, samples=20 00:27:05.886 lat (usec) : 750=0.02%, 1000=0.30% 00:27:05.886 lat (msec) : 2=0.85%, 4=1.17%, 10=2.57%, 20=5.85%, 50=13.09% 00:27:05.886 lat (msec) : 100=10.66%, 250=52.02%, 500=13.48% 00:27:05.886 cpu : usr=1.15%, sys=1.59%, ctx=2838, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,4362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job5: (groupid=0, jobs=1): err= 0: pid=1007957: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=509, BW=127MiB/s (134MB/s)(1285MiB/10082msec); 0 zone resets 00:27:05.886 slat (usec): min=16, max=86075, avg=1187.72, stdev=4202.34 00:27:05.886 clat (usec): min=797, max=402195, avg=124322.86, stdev=83951.72 00:27:05.886 lat (usec): min=841, max=402248, avg=125510.59, stdev=84905.72 00:27:05.886 clat percentiles (usec): 00:27:05.886 | 1.00th=[ 1811], 5.00th=[ 7046], 10.00th=[ 15270], 20.00th=[ 50070], 00:27:05.886 | 30.00th=[ 78119], 40.00th=[ 94897], 50.00th=[113771], 60.00th=[127402], 00:27:05.886 | 70.00th=[147850], 80.00th=[193987], 90.00th=[250610], 95.00th=[291505], 00:27:05.886 | 99.00th=[358613], 99.50th=[379585], 99.90th=[400557], 99.95th=[400557], 00:27:05.886 | 99.99th=[400557] 00:27:05.886 bw ( KiB/s): min=61440, max=287232, per=8.89%, avg=129928.10, stdev=55418.51, samples=20 00:27:05.886 iops : min= 240, max= 1122, avg=507.50, stdev=216.45, samples=20 00:27:05.886 lat (usec) : 1000=0.21% 
00:27:05.886 lat (msec) : 2=0.99%, 4=1.50%, 10=3.93%, 20=5.00%, 50=8.29% 00:27:05.886 lat (msec) : 100=23.18%, 250=46.68%, 500=10.22% 00:27:05.886 cpu : usr=1.55%, sys=1.62%, ctx=3328, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,5139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job6: (groupid=0, jobs=1): err= 0: pid=1007958: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=533, BW=133MiB/s (140MB/s)(1351MiB/10134msec); 0 zone resets 00:27:05.886 slat (usec): min=21, max=84623, avg=888.01, stdev=3725.85 00:27:05.886 clat (usec): min=829, max=417552, avg=118830.54, stdev=81668.99 00:27:05.886 lat (usec): min=884, max=417889, avg=119718.55, stdev=82509.06 00:27:05.886 clat percentiles (msec): 00:27:05.886 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 40], 00:27:05.886 | 30.00th=[ 57], 40.00th=[ 78], 50.00th=[ 113], 60.00th=[ 142], 00:27:05.886 | 70.00th=[ 161], 80.00th=[ 184], 90.00th=[ 228], 95.00th=[ 268], 00:27:05.886 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 405], 00:27:05.886 | 99.99th=[ 418] 00:27:05.886 bw ( KiB/s): min=40448, max=207872, per=9.36%, avg=136714.70, stdev=42848.49, samples=20 00:27:05.886 iops : min= 158, max= 812, avg=534.00, stdev=167.36, samples=20 00:27:05.886 lat (usec) : 1000=0.04% 00:27:05.886 lat (msec) : 2=0.22%, 4=0.74%, 10=1.87%, 20=5.90%, 50=15.99% 00:27:05.886 lat (msec) : 100=21.61%, 250=47.13%, 500=6.50% 00:27:05.886 cpu : usr=1.59%, sys=2.10%, ctx=4036, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,5404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job7: (groupid=0, jobs=1): err= 0: pid=1007959: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=453, BW=113MiB/s (119MB/s)(1148MiB/10133msec); 0 zone resets 00:27:05.886 slat (usec): min=16, max=134500, avg=1192.73, stdev=4442.76 00:27:05.886 clat (usec): min=900, max=432090, avg=139945.66, stdev=89325.03 00:27:05.886 lat (usec): min=926, max=432142, avg=141138.39, stdev=90305.66 00:27:05.886 clat percentiles (msec): 00:27:05.886 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 21], 20.00th=[ 46], 00:27:05.886 | 30.00th=[ 86], 40.00th=[ 112], 50.00th=[ 134], 60.00th=[ 161], 00:27:05.886 | 70.00th=[ 190], 80.00th=[ 224], 90.00th=[ 264], 95.00th=[ 296], 00:27:05.886 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:27:05.886 | 99.99th=[ 435] 00:27:05.886 bw ( KiB/s): min=55296, max=227840, per=7.94%, avg=115957.40, stdev=46853.97, samples=20 00:27:05.886 iops : min= 216, max= 890, avg=452.95, stdev=183.03, samples=20 00:27:05.886 lat (usec) : 1000=0.13% 00:27:05.886 lat (msec) : 2=0.44%, 4=0.89%, 10=3.35%, 20=5.20%, 50=10.84% 00:27:05.886 lat (msec) : 100=14.26%, 250=52.10%, 500=12.78% 00:27:05.886 cpu : usr=1.23%, sys=1.59%, ctx=3184, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,4593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job8: (groupid=0, jobs=1): err= 0: pid=1007962: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=535, BW=134MiB/s (140MB/s)(1359MiB/10153msec); 0 zone resets 00:27:05.886 slat (usec): min=21, 
max=65241, avg=898.84, stdev=3295.25 00:27:05.886 clat (usec): min=1733, max=321768, avg=118579.43, stdev=77465.17 00:27:05.886 lat (usec): min=1769, max=326377, avg=119478.27, stdev=78231.97 00:27:05.886 clat percentiles (msec): 00:27:05.886 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 45], 00:27:05.886 | 30.00th=[ 57], 40.00th=[ 80], 50.00th=[ 111], 60.00th=[ 142], 00:27:05.886 | 70.00th=[ 159], 80.00th=[ 184], 90.00th=[ 226], 95.00th=[ 266], 00:27:05.886 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:27:05.886 | 99.99th=[ 321] 00:27:05.886 bw ( KiB/s): min=68608, max=273920, per=9.41%, avg=137531.60, stdev=56374.17, samples=20 00:27:05.886 iops : min= 268, max= 1070, avg=537.20, stdev=220.19, samples=20 00:27:05.886 lat (msec) : 2=0.02%, 4=0.66%, 10=2.13%, 20=3.90%, 50=17.97% 00:27:05.886 lat (msec) : 100=21.98%, 250=46.96%, 500=6.36% 00:27:05.886 cpu : usr=1.48%, sys=1.85%, ctx=3764, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,5436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job9: (groupid=0, jobs=1): err= 0: pid=1007963: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=561, BW=140MiB/s (147MB/s)(1420MiB/10125msec); 0 zone resets 00:27:05.886 slat (usec): min=16, max=49873, avg=853.07, stdev=3164.81 00:27:05.886 clat (usec): min=735, max=322632, avg=113159.13, stdev=73973.06 00:27:05.886 lat (usec): min=841, max=322694, avg=114012.20, stdev=74750.71 00:27:05.886 clat percentiles (msec): 00:27:05.886 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 33], 00:27:05.886 | 30.00th=[ 68], 40.00th=[ 95], 50.00th=[ 111], 60.00th=[ 133], 00:27:05.886 | 70.00th=[ 153], 80.00th=[ 176], 90.00th=[ 218], 
95.00th=[ 245], 00:27:05.886 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 321], 99.95th=[ 321], 00:27:05.886 | 99.99th=[ 321] 00:27:05.886 bw ( KiB/s): min=57344, max=260608, per=9.84%, avg=143809.25, stdev=53355.88, samples=20 00:27:05.886 iops : min= 224, max= 1018, avg=561.75, stdev=208.42, samples=20 00:27:05.886 lat (usec) : 750=0.02%, 1000=0.07% 00:27:05.886 lat (msec) : 2=0.56%, 4=2.17%, 10=5.18%, 20=6.50%, 50=11.18% 00:27:05.886 lat (msec) : 100=16.81%, 250=53.27%, 500=4.26% 00:27:05.886 cpu : usr=1.67%, sys=1.87%, ctx=4048, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,5681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 job10: (groupid=0, jobs=1): err= 0: pid=1007964: Fri Jul 12 00:39:32 2024 00:27:05.886 write: IOPS=567, BW=142MiB/s (149MB/s)(1441MiB/10154msec); 0 zone resets 00:27:05.886 slat (usec): min=17, max=103607, avg=780.78, stdev=3554.29 00:27:05.886 clat (usec): min=741, max=427498, avg=111897.83, stdev=89971.23 00:27:05.886 lat (usec): min=773, max=435148, avg=112678.60, stdev=90864.82 00:27:05.886 clat percentiles (usec): 00:27:05.886 | 1.00th=[ 1631], 5.00th=[ 4228], 10.00th=[ 8979], 20.00th=[ 19792], 00:27:05.886 | 30.00th=[ 42730], 40.00th=[ 78119], 50.00th=[ 94897], 60.00th=[117965], 00:27:05.886 | 70.00th=[156238], 80.00th=[196084], 90.00th=[238027], 95.00th=[278922], 00:27:05.886 | 99.00th=[367002], 99.50th=[379585], 99.90th=[417334], 99.95th=[425722], 00:27:05.886 | 99.99th=[425722] 00:27:05.886 bw ( KiB/s): min=61952, max=243200, per=9.99%, avg=145956.25, stdev=51956.76, samples=20 00:27:05.886 iops : min= 242, max= 950, avg=570.10, stdev=202.95, samples=20 00:27:05.886 lat (usec) : 750=0.02%, 1000=0.26% 
00:27:05.886 lat (msec) : 2=1.35%, 4=3.04%, 10=6.23%, 20=9.35%, 50=11.76% 00:27:05.886 lat (msec) : 100=20.80%, 250=39.22%, 500=7.98% 00:27:05.886 cpu : usr=1.59%, sys=1.80%, ctx=4441, majf=0, minf=1 00:27:05.886 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:27:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:05.886 issued rwts: total=0,5765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:05.886 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:05.886 00:27:05.886 Run status group 0 (all jobs): 00:27:05.886 WRITE: bw=1427MiB/s (1496MB/s), 108MiB/s-156MiB/s (113MB/s-164MB/s), io=14.2GiB (15.2GB), run=10082-10160msec 00:27:05.886 00:27:05.886 Disk stats (read/write): 00:27:05.886 nvme0n1: ios=42/9507, merge=0/0, ticks=4133/1206481, in_queue=1210614, util=99.89% 00:27:05.886 nvme10n1: ios=49/9684, merge=0/0, ticks=77/1228301, in_queue=1228378, util=97.80% 00:27:05.886 nvme1n1: ios=44/10845, merge=0/0, ticks=1265/1200807, in_queue=1202072, util=100.00% 00:27:05.886 nvme2n1: ios=50/12459, merge=0/0, ticks=783/1220212, in_queue=1220995, util=99.95% 00:27:05.886 nvme3n1: ios=47/8476, merge=0/0, ticks=1039/1215404, in_queue=1216443, util=99.99% 00:27:05.886 nvme4n1: ios=44/10047, merge=0/0, ticks=109/1222132, in_queue=1222241, util=98.97% 00:27:05.886 nvme5n1: ios=44/10639, merge=0/0, ticks=1151/1225123, in_queue=1226274, util=100.00% 00:27:05.886 nvme6n1: ios=0/8964, merge=0/0, ticks=0/1222574, in_queue=1222574, util=98.32% 00:27:05.886 nvme7n1: ios=0/10694, merge=0/0, ticks=0/1228169, in_queue=1228169, util=98.72% 00:27:05.886 nvme8n1: ios=0/11089, merge=0/0, ticks=0/1225487, in_queue=1225487, util=98.91% 00:27:05.886 nvme9n1: ios=0/11349, merge=0/0, ticks=0/1231790, in_queue=1231790, util=99.05% 00:27:05.886 00:39:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:05.887 00:39:32 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:05.887 00:39:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.887 00:39:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:05.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:05.887 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection 
-- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:05.887 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:05.887 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.147 00:39:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:06.406 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:06.406 00:39:34 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.406 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:06.664 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.664 00:39:34 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:06.922 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:27:06.922 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:06.923 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.923 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:07.182 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1227 -- # return 0 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.182 00:39:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:07.441 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:07.441 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:07.441 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:07.441 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.442 00:39:35 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.701 rmmod nvme_tcp 00:27:07.701 rmmod nvme_fabrics 00:27:07.701 rmmod nvme_keyring 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1003197 ']' 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1003197 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 1003197 ']' 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 1003197 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1003197 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1003197' 00:27:07.701 killing process with pid 1003197 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 1003197 00:27:07.701 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 1003197 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.961 00:39:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.504 00:39:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.504 00:27:10.504 real 0m57.854s 00:27:10.504 user 3m14.933s 00:27:10.504 sys 0m23.414s 00:27:10.504 00:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:10.504 00:39:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:10.504 ************************************ 00:27:10.504 END TEST nvmf_multiconnection 00:27:10.504 ************************************ 00:27:10.504 00:39:37 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:10.504 00:39:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:10.504 00:39:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:10.504 00:39:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:10.504 ************************************ 00:27:10.504 START TEST nvmf_initiator_timeout 00:27:10.504 ************************************ 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 
00:27:10.504 * Looking for test storage... 00:27:10.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.504 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.505 00:39:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.505 00:39:37 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.505 00:39:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 
00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.884 
00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:27:11.884 Found 0000:08:00.0 (0x8086 - 0x159b) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 
-- # [[ tcp == rdma ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:27:11.884 Found 0000:08:00.1 (0x8086 - 0x159b) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:27:11.884 Found net devices under 0000:08:00.0: cvl_0_0 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:27:11.884 Found net devices under 0000:08:00.1: cvl_0_1 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.884 00:39:39 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.884 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:27:11.885 00:27:11.885 --- 10.0.0.2 ping statistics --- 00:27:11.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.885 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:11.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:27:11.885 00:27:11.885 --- 10.0.0.1 ping statistics --- 00:27:11.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.885 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1010596 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1010596 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 1010596 ']' 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:11.885 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:11.885 [2024-07-12 00:39:39.615389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:11.885 [2024-07-12 00:39:39.615478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.885 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.885 [2024-07-12 00:39:39.680902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.145 [2024-07-12 00:39:39.768368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.145 [2024-07-12 00:39:39.768416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.145 [2024-07-12 00:39:39.768431] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.145 [2024-07-12 00:39:39.768445] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.145 [2024-07-12 00:39:39.768456] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:12.145 [2024-07-12 00:39:39.768775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.145 [2024-07-12 00:39:39.768855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.145 [2024-07-12 00:39:39.771605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.145 [2024-07-12 00:39:39.771619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 Malloc0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 Delay0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 [2024-07-12 00:39:39.940014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.145 00:39:39 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:12.145 [2024-07-12 00:39:39.968258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.145 00:39:39 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:12.715 00:39:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:12.715 00:39:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:27:12.715 00:39:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.715 00:39:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:12.715 00:39:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:27:14.645 00:39:42 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1010837 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:14.645 00:39:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:14.645 [global] 00:27:14.645 thread=1 00:27:14.645 invalidate=1 00:27:14.645 rw=write 00:27:14.645 time_based=1 00:27:14.645 runtime=60 00:27:14.645 ioengine=libaio 00:27:14.645 direct=1 00:27:14.645 bs=4096 00:27:14.645 iodepth=1 00:27:14.645 norandommap=0 00:27:14.645 numjobs=1 00:27:14.645 00:27:14.645 verify_dump=1 00:27:14.645 verify_backlog=512 00:27:14.645 verify_state_save=0 00:27:14.645 do_verify=1 00:27:14.645 verify=crc32c-intel 00:27:14.645 [job0] 00:27:14.645 filename=/dev/nvme0n1 00:27:14.645 Could not set queue depth (nvme0n1) 00:27:14.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:14.903 fio-3.35 00:27:14.903 Starting 1 thread 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.191 true 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.191 true 00:27:18.191 00:39:45 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.191 true 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.191 true 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.191 00:39:45 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.720 true 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.720 true 
00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.720 true 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:20.720 true 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:20.720 00:39:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1010837 00:28:17.015 00:28:17.015 job0: (groupid=0, jobs=1): err= 0: pid=1010897: Fri Jul 12 00:40:42 2024 00:28:17.015 read: IOPS=640, BW=2560KiB/s (2621kB/s)(150MiB/60000msec) 00:28:17.015 slat (nsec): min=6080, max=64527, avg=8407.13, stdev=2012.13 00:28:17.015 clat (usec): min=215, max=40974k, avg=1328.60, stdev=209091.07 00:28:17.015 lat (usec): min=222, max=40974k, avg=1337.00, stdev=209091.06 00:28:17.015 clat percentiles (usec): 00:28:17.015 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 247], 00:28:17.015 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:28:17.015 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 293], 00:28:17.015 | 99.00th=[ 330], 99.50th=[ 474], 99.90th=[ 490], 
99.95th=[ 502], 00:28:17.015 | 99.99th=[ 947] 00:28:17.015 write: IOPS=644, BW=2578KiB/s (2640kB/s)(151MiB/60000msec); 0 zone resets 00:28:17.015 slat (usec): min=7, max=30106, avg=11.64, stdev=153.08 00:28:17.015 clat (usec): min=166, max=1865, avg=206.97, stdev=23.34 00:28:17.015 lat (usec): min=179, max=30379, avg=218.61, stdev=155.22 00:28:17.015 clat percentiles (usec): 00:28:17.015 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:28:17.015 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:28:17.015 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 241], 00:28:17.016 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 330], 00:28:17.016 | 99.99th=[ 709] 00:28:17.016 bw ( KiB/s): min= 3016, max= 8672, per=100.00%, avg=7976.42, stdev=910.84, samples=38 00:28:17.016 iops : min= 754, max= 2168, avg=1994.11, stdev=227.71, samples=38 00:28:17.016 lat (usec) : 250=63.85%, 500=36.11%, 750=0.03%, 1000=0.01% 00:28:17.016 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:28:17.016 cpu : usr=0.80%, sys=1.48%, ctx=77072, majf=0, minf=131 00:28:17.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.016 issued rwts: total=38400,38667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:17.016 00:28:17.016 Run status group 0 (all jobs): 00:28:17.016 READ: bw=2560KiB/s (2621kB/s), 2560KiB/s-2560KiB/s (2621kB/s-2621kB/s), io=150MiB (157MB), run=60000-60000msec 00:28:17.016 WRITE: bw=2578KiB/s (2640kB/s), 2578KiB/s-2578KiB/s (2640kB/s-2640kB/s), io=151MiB (158MB), run=60000-60000msec 00:28:17.016 00:28:17.016 Disk stats (read/write): 00:28:17.016 nvme0n1: ios=38405/38400, merge=0/0, ticks=12810/7697, in_queue=20507, util=99.99% 00:28:17.016 00:40:42 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:17.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:17.016 nvmf hotplug test: fio successful as expected 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.016 rmmod nvme_tcp 00:28:17.016 rmmod nvme_fabrics 00:28:17.016 rmmod nvme_keyring 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1010596 ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1010596 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 1010596 ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 1010596 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1010596 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1010596' 00:28:17.016 killing process with pid 1010596 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 1010596 00:28:17.016 00:40:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 1010596 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.016 00:40:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.583 00:40:45 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:17.583 00:28:17.583 real 1m7.407s 00:28:17.583 user 4m6.021s 00:28:17.583 sys 0m9.197s 00:28:17.583 00:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:17.583 00:40:45 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:17.583 ************************************ 00:28:17.583 END TEST nvmf_initiator_timeout 00:28:17.583 ************************************ 00:28:17.583 00:40:45 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy 
]] 00:28:17.583 00:40:45 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:17.583 00:40:45 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:17.583 00:40:45 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.583 00:40:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:18.958 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:18.958 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.958 
00:40:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.958 00:40:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.217 00:40:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:19.218 Found net devices under 0000:08:00.0: cvl_0_0 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:19.218 Found net devices under 0000:08:00.1: cvl_0_1 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:19.218 00:40:46 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:19.218 00:40:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:19.218 00:40:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:19.218 00:40:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:19.218 ************************************ 00:28:19.218 START TEST nvmf_perf_adq 00:28:19.218 ************************************ 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:19.218 * Looking for test storage... 
00:28:19.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.218 00:40:46 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.218 00:40:46 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.218 00:40:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:21.125 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.125 00:40:48 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:21.125 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.125 
00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:21.125 Found net devices under 0000:08:00.0: cvl_0_0 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.125 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:21.126 Found net devices under 0000:08:00.1: cvl_0_1 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:21.126 00:40:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:21.384 00:40:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:24.669 00:40:52 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.946 00:40:57 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:29.946 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:29.947 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:29.947 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.947 00:40:57 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:29.947 Found net devices under 0000:08:00.0: cvl_0_0 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:29.947 Found net devices under 0000:08:00.1: cvl_0_1 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 
netns cvl_0_0_ns_spdk 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:29.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:28:29.947 00:28:29.947 --- 10.0.0.2 ping statistics --- 00:28:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.947 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:28:29.947 00:28:29.947 --- 10.0.0.1 ping statistics --- 00:28:29.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.947 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1019954 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1019954 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 
-- # '[' -z 1019954 ']' 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:29.947 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.947 [2024-07-12 00:40:57.599394] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:29.947 [2024-07-12 00:40:57.599486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.947 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.947 [2024-07-12 00:40:57.664822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:29.947 [2024-07-12 00:40:57.756504] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.947 [2024-07-12 00:40:57.756561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.947 [2024-07-12 00:40:57.756576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:29.947 [2024-07-12 00:40:57.756597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:29.947 [2024-07-12 00:40:57.756610] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:29.947 [2024-07-12 00:40:57.756688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.947 [2024-07-12 00:40:57.756766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:29.947 [2024-07-12 00:40:57.756847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:29.947 [2024-07-12 00:40:57.756851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.208 00:40:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.208 [2024-07-12 00:40:58.025138] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.208 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.467 Malloc1 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.467 
00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.467 [2024-07-12 00:40:58.075121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1019988 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:30.467 00:40:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:30.467 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:32.372 
"tick_rate": 2700000000, 00:28:32.372 "poll_groups": [ 00:28:32.372 { 00:28:32.372 "name": "nvmf_tgt_poll_group_000", 00:28:32.372 "admin_qpairs": 1, 00:28:32.372 "io_qpairs": 1, 00:28:32.372 "current_admin_qpairs": 1, 00:28:32.372 "current_io_qpairs": 1, 00:28:32.372 "pending_bdev_io": 0, 00:28:32.372 "completed_nvme_io": 18997, 00:28:32.372 "transports": [ 00:28:32.372 { 00:28:32.372 "trtype": "TCP" 00:28:32.372 } 00:28:32.372 ] 00:28:32.372 }, 00:28:32.372 { 00:28:32.372 "name": "nvmf_tgt_poll_group_001", 00:28:32.372 "admin_qpairs": 0, 00:28:32.372 "io_qpairs": 1, 00:28:32.372 "current_admin_qpairs": 0, 00:28:32.372 "current_io_qpairs": 1, 00:28:32.372 "pending_bdev_io": 0, 00:28:32.372 "completed_nvme_io": 18739, 00:28:32.372 "transports": [ 00:28:32.372 { 00:28:32.372 "trtype": "TCP" 00:28:32.372 } 00:28:32.372 ] 00:28:32.372 }, 00:28:32.372 { 00:28:32.372 "name": "nvmf_tgt_poll_group_002", 00:28:32.372 "admin_qpairs": 0, 00:28:32.372 "io_qpairs": 1, 00:28:32.372 "current_admin_qpairs": 0, 00:28:32.372 "current_io_qpairs": 1, 00:28:32.372 "pending_bdev_io": 0, 00:28:32.372 "completed_nvme_io": 18435, 00:28:32.372 "transports": [ 00:28:32.372 { 00:28:32.372 "trtype": "TCP" 00:28:32.372 } 00:28:32.372 ] 00:28:32.372 }, 00:28:32.372 { 00:28:32.372 "name": "nvmf_tgt_poll_group_003", 00:28:32.372 "admin_qpairs": 0, 00:28:32.372 "io_qpairs": 1, 00:28:32.372 "current_admin_qpairs": 0, 00:28:32.372 "current_io_qpairs": 1, 00:28:32.372 "pending_bdev_io": 0, 00:28:32.372 "completed_nvme_io": 18204, 00:28:32.372 "transports": [ 00:28:32.372 { 00:28:32.372 "trtype": "TCP" 00:28:32.372 } 00:28:32.372 ] 00:28:32.372 } 00:28:32.372 ] 00:28:32.372 }' 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:32.372 00:41:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:32.372 00:41:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1019988 00:28:40.493 Initializing NVMe Controllers 00:28:40.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:40.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:40.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:40.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:40.493 Initialization complete. Launching workers. 00:28:40.493 ======================================================== 00:28:40.493 Latency(us) 00:28:40.493 Device Information : IOPS MiB/s Average min max 00:28:40.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9638.72 37.65 6640.26 2852.19 10647.87 00:28:40.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9802.52 38.29 6531.28 2780.66 10916.96 00:28:40.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9766.72 38.15 6552.36 2399.95 10935.04 00:28:40.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9931.82 38.80 6445.98 2379.06 11115.06 00:28:40.493 ======================================================== 00:28:40.493 Total : 39139.78 152.89 6541.73 2379.06 11115.06 00:28:40.493 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:40.493 00:41:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:40.493 rmmod nvme_tcp 00:28:40.493 rmmod nvme_fabrics 00:28:40.493 rmmod nvme_keyring 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1019954 ']' 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1019954 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1019954 ']' 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1019954 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1019954 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1019954' 00:28:40.493 killing process with pid 1019954 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1019954 00:28:40.493 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1019954 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:40.752 00:41:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.752 00:41:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.700 00:41:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:42.700 00:41:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:42.700 00:41:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:43.269 00:41:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:45.171 00:41:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ 
phy != virt ]] 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.445 00:41:17 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.445 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:28:50.446 Found 0000:08:00.0 (0x8086 - 0x159b) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:28:50.446 Found 0000:08:00.1 (0x8086 - 0x159b) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:08:00.0: cvl_0_0' 00:28:50.446 Found net devices under 0000:08:00.0: cvl_0_0 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:28:50.446 Found net devices under 0000:08:00.1: cvl_0_1 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.446 00:41:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:50.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:50.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:28:50.446 00:28:50.446 --- 10.0.0.2 ping statistics --- 00:28:50.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.446 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:28:50.446 00:28:50.446 --- 10.0.0.1 ping statistics --- 00:28:50.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.446 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
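The `nvmf_tcp_init` sequence above moves one port of the NIC pair into a network namespace so the target and initiator can talk over real hardware on one machine, then sanity-pings in both directions. A hedged sketch of the same steps, with interface names, the namespace name, and addresses taken from this run (they will differ on other hardware); it requires root and the actual NICs, so it is a reconstruction of what the harness does, not a drop-in script:

```shell
# Sketch of the topology nvmf_tcp_init builds in this log (names from this run).
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # target NIC lives in the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                              # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root ns
```

The SPDK target is then launched with `ip netns exec "$NS"` prepended (the `NVMF_TARGET_NS_CMD` seen later in the log), so it listens on 10.0.0.2 inside the namespace while `spdk_nvme_perf` connects from the root namespace.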
00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:50.446 net.core.busy_poll = 1 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:50.446 net.core.busy_read = 1 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:50.446 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1021993 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1021993 00:28:50.705 00:41:18 
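The `adq_configure_driver` step in the log amounts to: enable hardware TC offload on the ice NIC, turn on busy polling, split the queues into two traffic classes with `mqprio`, and steer NVMe/TCP traffic (dst 10.0.0.2:4420) into the second class with a hardware `flower` filter. A hedged sketch of those commands as they appear in this run (perf_adq.sh lines 22-38); names are from this log and the whole thing requires root plus an ADQ-capable ice NIC:

```shell
# ADQ configuration as performed in this log, run against the target NIC
# inside its namespace. This is a summary of the harness, not the harness itself.
NS=cvl_0_0_ns_spdk
IF=cvl_0_0

ip netns exec "$NS" ethtool --offload "$IF" hw-tc-offload on
ip netns exec "$NS" ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3.
ip netns exec "$NS" tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
ip netns exec "$NS" tc qdisc add dev "$IF" ingress
# Steer TCP to 10.0.0.2:4420 into hardware TC 1, bypassing software fallback.
ip netns exec "$NS" tc filter add dev "$IF" protocol ip parent ffff: prio 1 \
    flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

`skip_sw hw_tc 1` is what makes this an ADQ test rather than plain software steering: the filter must be installed in NIC hardware, and the SPDK sock layer (`--enable-placement-id`, set via RPC later in the log) pins connections to the matching queue set.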
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 1021993 ']' 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:50.705 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.705 [2024-07-12 00:41:18.336625] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:50.705 [2024-07-12 00:41:18.336730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.705 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.705 [2024-07-12 00:41:18.402462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.705 [2024-07-12 00:41:18.493660] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.705 [2024-07-12 00:41:18.493720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.705 [2024-07-12 00:41:18.493736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.705 [2024-07-12 00:41:18.493749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.705 [2024-07-12 00:41:18.493761] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.705 [2024-07-12 00:41:18.493836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.705 [2024-07-12 00:41:18.493922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.705 [2024-07-12 00:41:18.494002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.705 [2024-07-12 00:41:18.494006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 [2024-07-12 00:41:18.760939] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 Malloc1 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.963 
00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.963 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:51.221 [2024-07-12 00:41:18.809659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1022109 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:51.221 00:41:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:28:51.221 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:28:53.119 
"tick_rate": 2700000000, 00:28:53.119 "poll_groups": [ 00:28:53.119 { 00:28:53.119 "name": "nvmf_tgt_poll_group_000", 00:28:53.119 "admin_qpairs": 1, 00:28:53.119 "io_qpairs": 0, 00:28:53.119 "current_admin_qpairs": 1, 00:28:53.119 "current_io_qpairs": 0, 00:28:53.119 "pending_bdev_io": 0, 00:28:53.119 "completed_nvme_io": 0, 00:28:53.119 "transports": [ 00:28:53.119 { 00:28:53.119 "trtype": "TCP" 00:28:53.119 } 00:28:53.119 ] 00:28:53.119 }, 00:28:53.119 { 00:28:53.119 "name": "nvmf_tgt_poll_group_001", 00:28:53.119 "admin_qpairs": 0, 00:28:53.119 "io_qpairs": 4, 00:28:53.119 "current_admin_qpairs": 0, 00:28:53.119 "current_io_qpairs": 4, 00:28:53.119 "pending_bdev_io": 0, 00:28:53.119 "completed_nvme_io": 29691, 00:28:53.119 "transports": [ 00:28:53.119 { 00:28:53.119 "trtype": "TCP" 00:28:53.119 } 00:28:53.119 ] 00:28:53.119 }, 00:28:53.119 { 00:28:53.119 "name": "nvmf_tgt_poll_group_002", 00:28:53.119 "admin_qpairs": 0, 00:28:53.119 "io_qpairs": 0, 00:28:53.119 "current_admin_qpairs": 0, 00:28:53.119 "current_io_qpairs": 0, 00:28:53.119 "pending_bdev_io": 0, 00:28:53.119 "completed_nvme_io": 0, 00:28:53.119 "transports": [ 00:28:53.119 { 00:28:53.119 "trtype": "TCP" 00:28:53.119 } 00:28:53.119 ] 00:28:53.119 }, 00:28:53.119 { 00:28:53.119 "name": "nvmf_tgt_poll_group_003", 00:28:53.119 "admin_qpairs": 0, 00:28:53.119 "io_qpairs": 0, 00:28:53.119 "current_admin_qpairs": 0, 00:28:53.119 "current_io_qpairs": 0, 00:28:53.119 "pending_bdev_io": 0, 00:28:53.119 "completed_nvme_io": 0, 00:28:53.119 "transports": [ 00:28:53.119 { 00:28:53.119 "trtype": "TCP" 00:28:53.119 } 00:28:53.119 ] 00:28:53.119 } 00:28:53.119 ] 00:28:53.119 }' 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:28:53.119 00:41:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:28:53.119 00:41:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1022109 00:29:01.227 Initializing NVMe Controllers 00:29:01.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:01.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:01.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:01.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:01.227 Initialization complete. Launching workers. 00:29:01.227 ======================================================== 00:29:01.227 Latency(us) 00:29:01.227 Device Information : IOPS MiB/s Average min max 00:29:01.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 3615.10 14.12 17713.79 1943.60 67403.56 00:29:01.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3239.80 12.66 19764.73 2599.74 65413.75 00:29:01.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4146.30 16.20 15444.44 2204.82 64327.82 00:29:01.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4566.00 17.84 14025.48 2181.72 61688.78 00:29:01.227 ======================================================== 00:29:01.227 Total : 15567.20 60.81 16454.38 1943.60 67403.56 00:29:01.227 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:01.227 
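The pass/fail gate above (perf_adq.sh@99-101) queries `nvmf_get_stats`, counts how many poll groups carried zero I/O qpairs, and fails if fewer than 2 stayed idle — i.e. it verifies ADQ actually concentrated all four connections onto a subset of the poll groups (here, all 4 qpairs landed on `nvmf_tgt_poll_group_001`). The harness does this with `jq ... | wc -l`; below is a dependency-free sketch of the same count over stats abbreviated from this run (only the fields the check reads are kept):

```shell
# Abbreviated nvmf_get_stats output from this log: one group per line,
# only "name" and "current_io_qpairs" retained.
stats='"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 0
"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 4
"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0
"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0'

# Equivalent of the harness's
#   jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "$count"

# perf_adq.sh fails when fewer than 2 poll groups stayed idle.
if [ "$count" -lt 2 ]; then
    echo "ADQ check failed: I/O spread across too many poll groups" >&2
    exit 1
fi
```

In this run the count is 3, so the check passes and the perf run's latency table follows in the log.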
00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:01.227 00:41:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:01.227 rmmod nvme_tcp 00:29:01.227 rmmod nvme_fabrics 00:29:01.227 rmmod nvme_keyring 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1021993 ']' 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1021993 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 1021993 ']' 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 1021993 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1021993 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1021993' 00:29:01.227 killing process with pid 1021993 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 1021993 00:29:01.227 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 1021993 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:01.487 00:41:29 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.487 00:41:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.025 00:41:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:04.025 00:41:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:04.025 00:29:04.025 real 0m44.446s 00:29:04.025 user 2m37.922s 00:29:04.025 sys 0m11.756s 00:29:04.025 00:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:04.025 00:41:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:04.026 ************************************ 00:29:04.026 END TEST nvmf_perf_adq 00:29:04.026 ************************************ 00:29:04.026 00:41:31 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:04.026 00:41:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:04.026 00:41:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:04.026 00:41:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:04.026 ************************************ 00:29:04.026 START TEST nvmf_shutdown 00:29:04.026 ************************************ 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:04.026 * Looking for test storage... 
00:29:04.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:04.026 00:41:31 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:04.026 00:41:31 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.026 ************************************ 00:29:04.026 START TEST nvmf_shutdown_tc1 00:29:04.026 ************************************ 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.026 00:41:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:04.026 00:41:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:05.401 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:05.402 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:05.402 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:05.402 Found net devices under 0000:08:00.0: cvl_0_0 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.402 00:41:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:05.402 Found net devices under 0000:08:00.1: cvl_0_1 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:05.402 
00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:05.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:05.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:29:05.402 00:29:05.402 --- 10.0.0.2 ping statistics --- 00:29:05.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.402 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:05.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:05.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:29:05.402 00:29:05.402 --- 10.0.0.1 ping statistics --- 00:29:05.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:05.402 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1024516 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1024516 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1024516 ']' 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:05.402 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.402 [2024-07-12 00:41:33.236401] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:05.402 [2024-07-12 00:41:33.236478] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.660 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.660 [2024-07-12 00:41:33.300710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.660 [2024-07-12 00:41:33.388335] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.660 [2024-07-12 00:41:33.388392] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.660 [2024-07-12 00:41:33.388408] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.660 [2024-07-12 00:41:33.388422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.660 [2024-07-12 00:41:33.388433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.660 [2024-07-12 00:41:33.388513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.660 [2024-07-12 00:41:33.388564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:05.660 [2024-07-12 00:41:33.388616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.660 [2024-07-12 00:41:33.388619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.918 [2024-07-12 00:41:33.533254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:05.918 
00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.918 00:41:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:05.918 Malloc1 00:29:05.918 [2024-07-12 00:41:33.619617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.918 Malloc2 00:29:05.918 Malloc3 00:29:05.918 Malloc4 00:29:06.175 Malloc5 00:29:06.175 Malloc6 00:29:06.175 Malloc7 00:29:06.175 Malloc8 00:29:06.175 Malloc9 00:29:06.434 Malloc10 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1024577 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1024577 
/var/tmp/bdevperf.sock 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 1024577 ']' 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:06.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.434 { 00:29:06.434 "params": { 00:29:06.434 "name": "Nvme$subsystem", 00:29:06.434 "trtype": "$TEST_TRANSPORT", 00:29:06.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.434 "adrfam": "ipv4", 00:29:06.434 "trsvcid": "$NVMF_PORT", 00:29:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.434 "hdgst": ${hdgst:-false}, 00:29:06.434 "ddgst": ${ddgst:-false} 00:29:06.434 }, 00:29:06.434 "method": "bdev_nvme_attach_controller" 00:29:06.434 } 00:29:06.434 EOF 00:29:06.434 )") 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.434 { 00:29:06.434 "params": { 00:29:06.434 "name": "Nvme$subsystem", 00:29:06.434 "trtype": "$TEST_TRANSPORT", 00:29:06.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.434 "adrfam": "ipv4", 00:29:06.434 "trsvcid": "$NVMF_PORT", 00:29:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.434 "hdgst": ${hdgst:-false}, 00:29:06.434 "ddgst": ${ddgst:-false} 00:29:06.434 }, 00:29:06.434 "method": "bdev_nvme_attach_controller" 00:29:06.434 } 00:29:06.434 EOF 00:29:06.434 
)") 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.434 { 00:29:06.434 "params": { 00:29:06.434 "name": "Nvme$subsystem", 00:29:06.434 "trtype": "$TEST_TRANSPORT", 00:29:06.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.434 "adrfam": "ipv4", 00:29:06.434 "trsvcid": "$NVMF_PORT", 00:29:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.434 "hdgst": ${hdgst:-false}, 00:29:06.434 "ddgst": ${ddgst:-false} 00:29:06.434 }, 00:29:06.434 "method": "bdev_nvme_attach_controller" 00:29:06.434 } 00:29:06.434 EOF 00:29:06.434 )") 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.434 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.434 { 00:29:06.434 "params": { 00:29:06.434 "name": "Nvme$subsystem", 00:29:06.434 "trtype": "$TEST_TRANSPORT", 00:29:06.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.434 "adrfam": "ipv4", 00:29:06.434 "trsvcid": "$NVMF_PORT", 00:29:06.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.434 "hdgst": ${hdgst:-false}, 00:29:06.434 "ddgst": ${ddgst:-false} 00:29:06.434 }, 00:29:06.434 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 "hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 "hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 "hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 "hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 
"hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:06.435 { 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme$subsystem", 00:29:06.435 "trtype": "$TEST_TRANSPORT", 00:29:06.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "$NVMF_PORT", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.435 "hdgst": ${hdgst:-false}, 00:29:06.435 "ddgst": ${ddgst:-false} 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 } 00:29:06.435 EOF 00:29:06.435 )") 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:06.435 00:41:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme1", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme2", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme3", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme4", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme5", 00:29:06.435 
"trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme6", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme7", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme8", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme9", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.435 "trsvcid": "4420", 00:29:06.435 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:06.435 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:06.435 "hdgst": false, 00:29:06.435 "ddgst": 
false 00:29:06.435 }, 00:29:06.435 "method": "bdev_nvme_attach_controller" 00:29:06.435 },{ 00:29:06.435 "params": { 00:29:06.435 "name": "Nvme10", 00:29:06.435 "trtype": "tcp", 00:29:06.435 "traddr": "10.0.0.2", 00:29:06.435 "adrfam": "ipv4", 00:29:06.436 "trsvcid": "4420", 00:29:06.436 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:06.436 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:06.436 "hdgst": false, 00:29:06.436 "ddgst": false 00:29:06.436 }, 00:29:06.436 "method": "bdev_nvme_attach_controller" 00:29:06.436 }' 00:29:06.436 [2024-07-12 00:41:34.115970] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:06.436 [2024-07-12 00:41:34.116058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:06.436 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.436 [2024-07-12 00:41:34.178643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.436 [2024-07-12 00:41:34.266008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1024577 
00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:08.334 00:41:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:09.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1024577 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1024516 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.735 { 00:29:09.735 "params": { 00:29:09.735 "name": "Nvme$subsystem", 00:29:09.735 "trtype": "$TEST_TRANSPORT", 00:29:09.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.735 "adrfam": "ipv4", 00:29:09.735 "trsvcid": "$NVMF_PORT", 00:29:09.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.735 "hdgst": ${hdgst:-false}, 00:29:09.735 "ddgst": ${ddgst:-false} 00:29:09.735 }, 00:29:09.735 "method": "bdev_nvme_attach_controller" 00:29:09.735 } 00:29:09.735 EOF 00:29:09.735 )") 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.735 { 00:29:09.735 "params": { 00:29:09.735 "name": "Nvme$subsystem", 00:29:09.735 "trtype": "$TEST_TRANSPORT", 00:29:09.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.735 "adrfam": "ipv4", 00:29:09.735 "trsvcid": "$NVMF_PORT", 00:29:09.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.735 "hdgst": ${hdgst:-false}, 00:29:09.735 "ddgst": ${ddgst:-false} 00:29:09.735 }, 00:29:09.735 "method": "bdev_nvme_attach_controller" 00:29:09.735 } 00:29:09.735 EOF 00:29:09.735 )") 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.735 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.735 { 00:29:09.735 "params": { 00:29:09.735 "name": "Nvme$subsystem", 00:29:09.735 "trtype": "$TEST_TRANSPORT", 00:29:09.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.735 "adrfam": "ipv4", 00:29:09.735 "trsvcid": "$NVMF_PORT", 00:29:09.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.735 "hdgst": ${hdgst:-false}, 00:29:09.735 "ddgst": ${ddgst:-false} 00:29:09.735 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 
"trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 
00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:09.736 { 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme$subsystem", 00:29:09.736 "trtype": "$TEST_TRANSPORT", 00:29:09.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "$NVMF_PORT", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.736 "hdgst": ${hdgst:-false}, 00:29:09.736 "ddgst": ${ddgst:-false} 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 } 00:29:09.736 EOF 00:29:09.736 )") 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:09.736 00:41:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:09.736 00:41:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme1", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme2", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme3", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme4", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 
},{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme5", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme6", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme7", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.736 "method": "bdev_nvme_attach_controller" 00:29:09.736 },{ 00:29:09.736 "params": { 00:29:09.736 "name": "Nvme8", 00:29:09.736 "trtype": "tcp", 00:29:09.736 "traddr": "10.0.0.2", 00:29:09.736 "adrfam": "ipv4", 00:29:09.736 "trsvcid": "4420", 00:29:09.736 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:09.736 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:09.736 "hdgst": false, 00:29:09.736 "ddgst": false 00:29:09.736 }, 00:29:09.737 "method": "bdev_nvme_attach_controller" 00:29:09.737 },{ 00:29:09.737 "params": { 00:29:09.737 "name": "Nvme9", 00:29:09.737 "trtype": "tcp", 00:29:09.737 "traddr": "10.0.0.2", 00:29:09.737 "adrfam": "ipv4", 00:29:09.737 "trsvcid": "4420", 00:29:09.737 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:09.737 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:09.737 "hdgst": false, 00:29:09.737 "ddgst": false 00:29:09.737 }, 00:29:09.737 "method": "bdev_nvme_attach_controller" 00:29:09.737 },{ 00:29:09.737 "params": { 00:29:09.737 "name": "Nvme10", 00:29:09.737 "trtype": "tcp", 00:29:09.737 "traddr": "10.0.0.2", 00:29:09.737 "adrfam": "ipv4", 00:29:09.737 "trsvcid": "4420", 00:29:09.737 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:09.737 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:09.737 "hdgst": false, 00:29:09.737 "ddgst": false 00:29:09.737 }, 00:29:09.737 "method": "bdev_nvme_attach_controller" 00:29:09.737 }' 00:29:09.737 [2024-07-12 00:41:37.200500] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:09.737 [2024-07-12 00:41:37.200604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024900 ] 00:29:09.737 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.737 [2024-07-12 00:41:37.264401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.737 [2024-07-12 00:41:37.355559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.635 Running I/O for 1 seconds... 
00:29:12.568 00:29:12.568 Latency(us) 00:29:12.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.568 Nvme1n1 : 1.02 187.32 11.71 0.00 0.00 337164.52 25437.68 302921.96 00:29:12.568 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.568 Nvme2n1 : 1.13 169.93 10.62 0.00 0.00 363773.85 21748.24 304475.40 00:29:12.568 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.568 Nvme3n1 : 1.09 176.61 11.04 0.00 0.00 340403.14 20194.80 307582.29 00:29:12.568 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.568 Nvme4n1 : 1.20 217.04 13.56 0.00 0.00 273292.00 8204.14 299815.06 00:29:12.568 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.568 Nvme5n1 : 1.21 211.57 13.22 0.00 0.00 275128.89 21845.33 306028.85 00:29:12.568 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.568 Verification LBA range: start 0x0 length 0x400 00:29:12.569 Nvme6n1 : 1.22 212.30 13.27 0.00 0.00 268724.53 1953.94 292047.83 00:29:12.569 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.569 Verification LBA range: start 0x0 length 0x400 00:29:12.569 Nvme7n1 : 1.23 208.15 13.01 0.00 0.00 269824.19 22719.15 287387.50 00:29:12.569 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.569 Verification LBA range: start 0x0 length 0x400 00:29:12.569 Nvme8n1 : 1.22 209.65 13.10 0.00 0.00 261958.54 18641.35 287387.50 00:29:12.569 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:29:12.569 Verification LBA range: start 0x0 length 0x400 00:29:12.569 Nvme9n1 : 1.23 207.46 12.97 0.00 0.00 259539.25 24078.41 302921.96 00:29:12.569 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:12.569 Verification LBA range: start 0x0 length 0x400 00:29:12.569 Nvme10n1 : 1.24 206.69 12.92 0.00 0.00 255244.14 19320.98 330883.98 00:29:12.569 =================================================================================================================== 00:29:12.569 Total : 2006.71 125.42 0.00 0.00 285859.94 1953.94 330883.98 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:12.827 rmmod nvme_tcp 00:29:12.827 rmmod nvme_fabrics 00:29:12.827 rmmod 
nvme_keyring 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1024516 ']' 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1024516 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 1024516 ']' 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 1024516 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1024516 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1024516' 00:29:12.827 killing process with pid 1024516 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 1024516 00:29:12.827 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 1024516 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.087 00:41:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.618 00:29:15.618 real 0m11.476s 00:29:15.618 user 0m34.741s 00:29:15.618 sys 0m2.859s 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:15.618 ************************************ 00:29:15.618 END TEST nvmf_shutdown_tc1 00:29:15.618 ************************************ 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.618 ************************************ 00:29:15.618 START TEST nvmf_shutdown_tc2 00:29:15.618 ************************************ 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # 
nvmf_shutdown_tc2 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:15.618 00:41:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:15.618 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:15.619 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:15.619 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:15.619 Found net devices under 0000:08:00.0: cvl_0_0 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:15.619 Found net devices under 0000:08:00.1: cvl_0_1 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:15.619 00:41:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.619 00:41:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:15.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:29:15.619 00:29:15.619 --- 10.0.0.2 ping statistics --- 00:29:15.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.619 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:29:15.619 00:29:15.619 --- 10.0.0.1 ping statistics --- 00:29:15.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.619 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1025590 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1025590 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1025590 ']' 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.619 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.620 [2024-07-12 00:41:43.138773] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:15.620 [2024-07-12 00:41:43.138852] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.620 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.620 [2024-07-12 00:41:43.194656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.620 [2024-07-12 00:41:43.272481] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.620 [2024-07-12 00:41:43.272540] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
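The nvmf_tcp_init sequence logged a few lines up (netns creation, address assignment, firewall rule, ping check) can be condensed into the dry-run sketch below. The device names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from this log; the commands are echoed rather than executed, since a real run needs root and the physical E810 ports present on this CI host:

```shell
# Dry-run sketch of nvmf/common.sh's nvmf_tcp_init as recorded in this log.
# Echoes each step instead of executing it (real execution needs root and
# the cvl_0_0/cvl_0_1 netdevs).
nvmf_tcp_init_sketch() {
  ns=cvl_0_0_ns_spdk
  echo "ip netns add $ns"
  echo "ip link set cvl_0_0 netns $ns"                          # target port into the namespace
  echo "ip addr add 10.0.0.1/24 dev cvl_0_1"                    # initiator side
  echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"  # target side
  echo "ip link set cvl_0_1 up"
  echo "ip netns exec $ns ip link set cvl_0_0 up"
  echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  echo "ping -c 1 10.0.0.2"                                     # initiator -> target sanity check
}
nvmf_tcp_init_sketch
```

The ping checks in the log confirm both directions of this plumbing before the target app is launched inside the namespace.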
00:29:15.620 [2024-07-12 00:41:43.272556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.620 [2024-07-12 00:41:43.272570] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.620 [2024-07-12 00:41:43.272583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.620 [2024-07-12 00:41:43.272670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.620 [2024-07-12 00:41:43.272753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.620 [2024-07-12 00:41:43.272837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.620 [2024-07-12 00:41:43.272872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.620 [2024-07-12 00:41:43.421132] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.620 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.877 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.877 Malloc1 00:29:15.877 [2024-07-12 00:41:43.507541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.877 Malloc2 00:29:15.877 Malloc3 00:29:15.877 Malloc4 00:29:15.877 Malloc5 00:29:15.877 Malloc6 00:29:16.134 Malloc7 00:29:16.134 Malloc8 00:29:16.134 Malloc9 00:29:16.134 Malloc10 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1025652 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1025652 /var/tmp/bdevperf.sock 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1025652 ']' 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:16.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.134 { 00:29:16.134 "params": { 00:29:16.134 "name": "Nvme$subsystem", 00:29:16.134 "trtype": "$TEST_TRANSPORT", 00:29:16.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.134 "adrfam": "ipv4", 00:29:16.134 "trsvcid": "$NVMF_PORT", 00:29:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.134 "hdgst": ${hdgst:-false}, 00:29:16.134 "ddgst": ${ddgst:-false} 00:29:16.134 }, 00:29:16.134 "method": "bdev_nvme_attach_controller" 00:29:16.134 } 00:29:16.134 EOF 00:29:16.134 )") 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.134 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.134 { 00:29:16.134 "params": { 00:29:16.134 "name": "Nvme$subsystem", 00:29:16.134 "trtype": "$TEST_TRANSPORT", 00:29:16.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.134 "adrfam": "ipv4", 00:29:16.134 "trsvcid": "$NVMF_PORT", 00:29:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.134 "hdgst": ${hdgst:-false}, 00:29:16.135 "ddgst": ${ddgst:-false} 00:29:16.135 }, 00:29:16.135 "method": "bdev_nvme_attach_controller" 00:29:16.135 } 00:29:16.135 EOF 00:29:16.135 
)") 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.135 { 00:29:16.135 "params": { 00:29:16.135 "name": "Nvme$subsystem", 00:29:16.135 "trtype": "$TEST_TRANSPORT", 00:29:16.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.135 "adrfam": "ipv4", 00:29:16.135 "trsvcid": "$NVMF_PORT", 00:29:16.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.135 "hdgst": ${hdgst:-false}, 00:29:16.135 "ddgst": ${ddgst:-false} 00:29:16.135 }, 00:29:16.135 "method": "bdev_nvme_attach_controller" 00:29:16.135 } 00:29:16.135 EOF 00:29:16.135 )") 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.135 { 00:29:16.135 "params": { 00:29:16.135 "name": "Nvme$subsystem", 00:29:16.135 "trtype": "$TEST_TRANSPORT", 00:29:16.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.135 "adrfam": "ipv4", 00:29:16.135 "trsvcid": "$NVMF_PORT", 00:29:16.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.135 "hdgst": ${hdgst:-false}, 00:29:16.135 "ddgst": ${ddgst:-false} 00:29:16.135 }, 00:29:16.135 "method": "bdev_nvme_attach_controller" 00:29:16.135 } 00:29:16.135 EOF 00:29:16.135 )") 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.135 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.135 00:41:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.135 { 00:29:16.135 "params": { 00:29:16.135 "name": "Nvme$subsystem", 00:29:16.135 "trtype": "$TEST_TRANSPORT", 00:29:16.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.135 "adrfam": "ipv4", 00:29:16.135 "trsvcid": "$NVMF_PORT", 00:29:16.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.135 "hdgst": ${hdgst:-false}, 00:29:16.135 "ddgst": ${ddgst:-false} 00:29:16.135 }, 00:29:16.135 "method": "bdev_nvme_attach_controller" 00:29:16.135 } 00:29:16.135 EOF 00:29:16.135 )") 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.391 { 00:29:16.391 "params": { 00:29:16.391 "name": "Nvme$subsystem", 00:29:16.391 "trtype": "$TEST_TRANSPORT", 00:29:16.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.391 "adrfam": "ipv4", 00:29:16.391 "trsvcid": "$NVMF_PORT", 00:29:16.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.391 "hdgst": ${hdgst:-false}, 00:29:16.391 "ddgst": ${ddgst:-false} 00:29:16.391 }, 00:29:16.391 "method": "bdev_nvme_attach_controller" 00:29:16.391 } 00:29:16.391 EOF 00:29:16.391 )") 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.391 { 00:29:16.391 "params": { 00:29:16.391 "name": "Nvme$subsystem", 00:29:16.391 "trtype": "$TEST_TRANSPORT", 00:29:16.391 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:16.391 "adrfam": "ipv4", 00:29:16.391 "trsvcid": "$NVMF_PORT", 00:29:16.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.391 "hdgst": ${hdgst:-false}, 00:29:16.391 "ddgst": ${ddgst:-false} 00:29:16.391 }, 00:29:16.391 "method": "bdev_nvme_attach_controller" 00:29:16.391 } 00:29:16.391 EOF 00:29:16.391 )") 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.391 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.391 { 00:29:16.391 "params": { 00:29:16.391 "name": "Nvme$subsystem", 00:29:16.391 "trtype": "$TEST_TRANSPORT", 00:29:16.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "$NVMF_PORT", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.392 "hdgst": ${hdgst:-false}, 00:29:16.392 "ddgst": ${ddgst:-false} 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 } 00:29:16.392 EOF 00:29:16.392 )") 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.392 { 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme$subsystem", 00:29:16.392 "trtype": "$TEST_TRANSPORT", 00:29:16.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "$NVMF_PORT", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.392 
"hdgst": ${hdgst:-false}, 00:29:16.392 "ddgst": ${ddgst:-false} 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 } 00:29:16.392 EOF 00:29:16.392 )") 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:16.392 { 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme$subsystem", 00:29:16.392 "trtype": "$TEST_TRANSPORT", 00:29:16.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "$NVMF_PORT", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:16.392 "hdgst": ${hdgst:-false}, 00:29:16.392 "ddgst": ${ddgst:-false} 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 } 00:29:16.392 EOF 00:29:16.392 )") 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:16.392 00:41:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme1", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme2", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme3", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme4", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme5", 00:29:16.392 
"trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme6", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme7", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme8", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme9", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": 
false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 },{ 00:29:16.392 "params": { 00:29:16.392 "name": "Nvme10", 00:29:16.392 "trtype": "tcp", 00:29:16.392 "traddr": "10.0.0.2", 00:29:16.392 "adrfam": "ipv4", 00:29:16.392 "trsvcid": "4420", 00:29:16.392 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:16.392 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:16.392 "hdgst": false, 00:29:16.392 "ddgst": false 00:29:16.392 }, 00:29:16.392 "method": "bdev_nvme_attach_controller" 00:29:16.392 }' 00:29:16.392 [2024-07-12 00:41:44.002460] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:16.392 [2024-07-12 00:41:44.002555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025652 ] 00:29:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.392 [2024-07-12 00:41:44.064679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.392 [2024-07-12 00:41:44.152060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.290 Running I/O for 10 seconds... 
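The trace above shows `nvmf/common.sh` building the bdevperf controller list one heredoc stanza per subsystem, then joining the stanzas with `IFS=,` and `printf` into the JSON that is finally printed. A minimal runnable sketch of that pattern follows; the wrapper name `gen_conf` and the fallback default values are my own stand-ins for the harness environment variables, not names confirmed by the log.

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config accumulation seen in the
# nvmf/common.sh trace (steps @534/@554 and the @557-558 join).
# Defaults stand in for TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, etc.
gen_conf() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # Each pass appends one attach-controller stanza to the array.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas, as the IFS=, + printf step does above,
    # yielding the "},{"-separated list bdevperf receives.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_conf 1 2
```

Passing `1 2` produces two joined stanzas; the log's run passes subsystems 1 through 10, which is why ten `bdev_nvme_attach_controller` blocks appear in the printed config.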
00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set 
+x 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=14 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 14 -ge 100 ']' 00:29:18.290 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.548 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:18.806 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.806 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:18.806 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:18.806 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:19.064 00:41:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1025652 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1025652 ']' 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1025652 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1025652 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1025652' 00:29:19.064 killing process with pid 1025652 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1025652 00:29:19.064 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1025652 00:29:19.064 Received shutdown signal, test time was about 1.064354 seconds 00:29:19.064 00:29:19.064 Latency(us) 00:29:19.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.064 Verification LBA range: start 0x0 length 0x400 00:29:19.064 Nvme1n1 : 1.02 188.82 11.80 0.00 0.00 334268.49 23787.14 287387.50 00:29:19.064 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.064 Verification LBA range: start 0x0 length 0x400 00:29:19.064 Nvme2n1 : 1.03 189.49 11.84 0.00 0.00 324865.56 3094.76 307582.29 00:29:19.064 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.064 Verification LBA range: start 0x0 length 0x400 00:29:19.064 Nvme3n1 : 1.06 240.73 15.05 0.00 0.00 250975.00 16505.36 315349.52 00:29:19.064 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.064 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme4n1 : 1.03 187.20 11.70 0.00 0.00 313939.44 21068.61 313796.08 00:29:19.065 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme5n1 : 1.04 184.30 11.52 0.00 0.00 312291.68 30292.20 307582.29 00:29:19.065 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme6n1 : 1.05 182.57 11.41 0.00 0.00 308023.75 24855.13 316902.97 00:29:19.065 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme7n1 : 1.02 187.98 11.75 0.00 0.00 290358.36 24272.59 332437.43 00:29:19.065 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme8n1 : 1.04 187.65 11.73 0.00 0.00 282828.34 5291.43 309135.74 00:29:19.065 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme9n1 : 1.06 186.56 11.66 0.00 0.00 278870.71 6140.97 315349.52 00:29:19.065 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:19.065 Verification LBA range: start 0x0 length 0x400 00:29:19.065 Nvme10n1 : 1.06 185.94 11.62 0.00 0.00 272563.66 3713.71 338651.21 00:29:19.065 =================================================================================================================== 00:29:19.065 Total : 1921.23 120.08 0.00 0.00 295352.38 3094.76 338651.21 00:29:19.323 00:41:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1025590 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.255 00:41:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.255 rmmod nvme_tcp 00:29:20.255 rmmod nvme_fabrics 00:29:20.255 rmmod nvme_keyring 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1025590 ']' 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1025590 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 1025590 ']' 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 1025590 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1025590 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1025590' 00:29:20.255 killing process with pid 1025590 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 1025590 00:29:20.255 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 1025590 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:20.845 00:41:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:22.755 00:29:22.755 real 0m7.494s 00:29:22.755 user 0m22.979s 00:29:22.755 sys 0m1.438s 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:22.755 ************************************ 00:29:22.755 END TEST nvmf_shutdown_tc2 00:29:22.755 ************************************ 
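The tc2 run above repeatedly drives `target/shutdown.sh`'s `waitforio` helper: it polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `num_read_ops` with `jq`, and retries every 0.25 s until at least 100 reads have completed (the log shows counts 14, 67, then 131). A sketch of that loop follows; `rpc_cmd` is stubbed with a canned reply so the sketch runs without an SPDK target, which is an assumption of this example only.

```shell
#!/usr/bin/env bash
# Sketch of the waitforio polling loop traced above (shutdown.sh@50-69).
# rpc_cmd is a stub returning a fixed iostat reply; the real helper talks
# to bdevperf over the given UNIX socket.
rpc_cmd() { echo '{"bdevs":[{"num_read_ops":131}]}'; }

waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # Declare I/O flowing once at least 100 reads have completed.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1 && echo "IO flowing"
```

The bounded retry count (10 iterations) means the helper fails fast instead of hanging when the target never serves I/O, which matters in a shutdown test where the target is being torn down deliberately.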
00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:22.755 ************************************ 00:29:22.755 START TEST nvmf_shutdown_tc3 00:29:22.755 ************************************ 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:22.755 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:22.755 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:22.755 00:41:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:22.755 Found net devices under 0000:08:00.0: cvl_0_0 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:22.755 00:41:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:22.755 Found net devices under 0000:08:00.1: cvl_0_1 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:22.755 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:22.756 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:29:23.016 00:29:23.016 --- 10.0.0.2 ping statistics --- 00:29:23.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.016 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:29:23.016 00:29:23.016 --- 10.0.0.1 ping statistics --- 00:29:23.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.016 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1026377 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1026377 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1026377 ']' 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:23.016 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.016 [2024-07-12 00:41:50.716298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:23.016 [2024-07-12 00:41:50.716407] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.016 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.016 [2024-07-12 00:41:50.783030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.276 [2024-07-12 00:41:50.874392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.276 [2024-07-12 00:41:50.874450] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:23.276 [2024-07-12 00:41:50.874466] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.276 [2024-07-12 00:41:50.874479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.276 [2024-07-12 00:41:50.874491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.276 [2024-07-12 00:41:50.874576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.276 [2024-07-12 00:41:50.874629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.276 [2024-07-12 00:41:50.874709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:23.276 [2024-07-12 00:41:50.874741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.276 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:23.276 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:29:23.276 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:23.276 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.276 00:41:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.276 [2024-07-12 00:41:51.021262] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:23.276 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.276 Malloc1 00:29:23.276 [2024-07-12 00:41:51.111712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.534 Malloc2 00:29:23.534 Malloc3 00:29:23.534 Malloc4 00:29:23.534 Malloc5 00:29:23.534 Malloc6 00:29:23.534 Malloc7 00:29:23.792 Malloc8 00:29:23.792 Malloc9 00:29:23.792 Malloc10 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1026524 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1026524 /var/tmp/bdevperf.sock 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 1026524 ']' 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.792 { 00:29:23.792 "params": { 00:29:23.792 "name": "Nvme$subsystem", 00:29:23.792 "trtype": "$TEST_TRANSPORT", 00:29:23.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.792 "adrfam": "ipv4", 00:29:23.792 "trsvcid": "$NVMF_PORT", 00:29:23.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.792 "hdgst": ${hdgst:-false}, 00:29:23.792 "ddgst": ${ddgst:-false} 00:29:23.792 }, 00:29:23.792 "method": "bdev_nvme_attach_controller" 00:29:23.792 } 00:29:23.792 EOF 00:29:23.792 )") 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.792 { 00:29:23.792 "params": { 00:29:23.792 "name": "Nvme$subsystem", 00:29:23.792 "trtype": "$TEST_TRANSPORT", 00:29:23.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.792 "adrfam": "ipv4", 00:29:23.792 "trsvcid": "$NVMF_PORT", 00:29:23.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.792 "hdgst": ${hdgst:-false}, 00:29:23.792 "ddgst": ${ddgst:-false} 00:29:23.792 }, 00:29:23.792 "method": "bdev_nvme_attach_controller" 00:29:23.792 } 00:29:23.792 EOF 00:29:23.792 
)") 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.792 { 00:29:23.792 "params": { 00:29:23.792 "name": "Nvme$subsystem", 00:29:23.792 "trtype": "$TEST_TRANSPORT", 00:29:23.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.792 "adrfam": "ipv4", 00:29:23.792 "trsvcid": "$NVMF_PORT", 00:29:23.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.792 "hdgst": ${hdgst:-false}, 00:29:23.792 "ddgst": ${ddgst:-false} 00:29:23.792 }, 00:29:23.792 "method": "bdev_nvme_attach_controller" 00:29:23.792 } 00:29:23.792 EOF 00:29:23.792 )") 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.792 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.792 { 00:29:23.792 "params": { 00:29:23.792 "name": "Nvme$subsystem", 00:29:23.792 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 
"hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.793 { 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme$subsystem", 00:29:23.793 "trtype": "$TEST_TRANSPORT", 00:29:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "$NVMF_PORT", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.793 "hdgst": ${hdgst:-false}, 00:29:23.793 "ddgst": ${ddgst:-false} 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 } 00:29:23.793 EOF 00:29:23.793 )") 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:23.793 00:41:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme1", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme2", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme3", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme4", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme5", 00:29:23.793 
"trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme6", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme7", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme8", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.793 "trsvcid": "4420", 00:29:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:23.793 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:23.793 "hdgst": false, 00:29:23.793 "ddgst": false 00:29:23.793 }, 00:29:23.793 "method": "bdev_nvme_attach_controller" 00:29:23.793 },{ 00:29:23.793 "params": { 00:29:23.793 "name": "Nvme9", 00:29:23.793 "trtype": "tcp", 00:29:23.793 "traddr": "10.0.0.2", 00:29:23.793 "adrfam": "ipv4", 00:29:23.794 "trsvcid": "4420", 00:29:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:23.794 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:23.794 "hdgst": false, 00:29:23.794 "ddgst": 
false 00:29:23.794 }, 00:29:23.794 "method": "bdev_nvme_attach_controller" 00:29:23.794 },{ 00:29:23.794 "params": { 00:29:23.794 "name": "Nvme10", 00:29:23.794 "trtype": "tcp", 00:29:23.794 "traddr": "10.0.0.2", 00:29:23.794 "adrfam": "ipv4", 00:29:23.794 "trsvcid": "4420", 00:29:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:23.794 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:23.794 "hdgst": false, 00:29:23.794 "ddgst": false 00:29:23.794 }, 00:29:23.794 "method": "bdev_nvme_attach_controller" 00:29:23.794 }' 00:29:23.794 [2024-07-12 00:41:51.592148] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:23.794 [2024-07-12 00:41:51.592238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026524 ] 00:29:23.794 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.051 [2024-07-12 00:41:51.654491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.051 [2024-07-12 00:41:51.742025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.422 Running I/O for 10 seconds... 
00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:25.987 
00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:25.987 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:26.244 00:41:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i 
!= 0 )) 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1026377 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 1026377 ']' 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 1026377 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1026377 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:26.510 
00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1026377' 00:29:26.510 killing process with pid 1026377 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 1026377 00:29:26.510 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 1026377 00:29:26.510
[2024-07-12 00:41:54.313814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e350 is same with the state(5) to be set 00:29:26.510
[message repeated for tqpair=0x187e350 with advancing timestamps through 2024-07-12 00:41:54.314821]
[2024-07-12 00:41:54.316560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880d50 is same with the state(5) to be set 00:29:26.511
[message repeated for tqpair=0x1880d50 with advancing timestamps through 2024-07-12 00:41:54.317603]
[2024-07-12 00:41:54.319754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e7f0 is same with the state(5) to be set 00:29:26.511
[message repeated for tqpair=0x187e7f0 with advancing timestamps through 2024-07-12 00:41:54.320450]
[2024-07-12 00:41:54.321908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ec90 is same with the state(5) to be set 00:29:26.512
[message repeated for tqpair=0x187ec90 with advancing timestamps through 2024-07-12 00:41:54.321978]
[2024-07-12 00:41:54.322356] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:26.512
[2024-07-12 00:41:54.323193] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.512
[message repeated for tqpair=0x187f150 with advancing timestamps through 2024-07-12 00:41:54.324058]
[2024-07-12 00:41:54.324072]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.513 [2024-07-12 00:41:54.324085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.513 [2024-07-12 00:41:54.324099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.513 [2024-07-12 00:41:54.324113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.513 [2024-07-12 00:41:54.324127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f150 is same with the state(5) to be set 00:29:26.513 [2024-07-12 00:41:54.324475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.513 [2024-07-12 00:41:54.324510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.513 [2024-07-12 00:41:54.324541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.513 [2024-07-12 00:41:54.324571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.513 [2024-07-12 00:41:54.324598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.513 [2024-07-12 00:41:54.324615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.513 [2024-07-12 00:41:54.324633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.324976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.324997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325014] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187f5f0 is same with the state(5) to be set
00:29:26.513 [2024-07-12 00:41:54.325031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.513 [2024-07-12 00:41:54.325375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.513 [2024-07-12 00:41:54.325391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.514 [2024-07-12 00:41:54.325949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.514 [2024-07-12 00:41:54.325967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.325984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.515 [2024-07-12 00:41:54.326709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.515 [2024-07-12 00:41:54.326759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*:
CQ transport error -6 (No such device or address) on qpair id 1 00:29:26.515 [2024-07-12 00:41:54.326834] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9a790 was disconnected and freed. reset controller. 00:29:26.515 [2024-07-12 00:41:54.327391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515 [2024-07-12 00:41:54.327416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515 [2024-07-12 00:41:54.327433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515 [2024-07-12 00:41:54.327448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515 [2024-07-12 00:41:54.327463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515 [2024-07-12 00:41:54.327478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515 [2024-07-12 00:41:54.327493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515 [2024-07-12 00:41:54.327508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515 [2024-07-12 00:41:54.327523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf44c0 is same with the state(5) to be set 00:29:26.515 [2024-07-12 00:41:54.327560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515 [2024-07-12 00:41:54.327597] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515
[2024-07-12 00:41:54.327651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515
[2024-07-12 00:41:54.327688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515
[2024-07-12 00:41:54.327725] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515
[2024-07-12 00:41:54.327753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327757] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d8b0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.515
[2024-07-12 00:41:54.327840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.515
[2024-07-12 00:41:54.327843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.515
[2024-07-12 00:41:54.327854] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.327868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.327882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.327897] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.327911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.327927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92fd0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327960] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.327992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328054] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328069] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328083] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf0e0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328192] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328222] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328280] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b600 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328308] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328322] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.516
[2024-07-12 00:41:54.328449] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.516
[2024-07-12 00:41:54.328464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b980 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.328478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187fab0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.330316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.330347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.330362] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.330376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516
[2024-07-12 00:41:54.330371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:26.516 [2024-07-12 00:41:54.330390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b600 (9): Bad file descriptor 00:29:26.516 [2024-07-12 00:41:54.330422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.516 [2024-07-12 00:41:54.330528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330545] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330733] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330820] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330852] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330908] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330922] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.330994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331077] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331158] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331213] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.331244] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18803f0 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332141] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332196] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332224] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332252] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332265] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332293] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332307] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:26.517 [2024-07-12 00:41:54.332321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332457] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.517 [2024-07-12 00:41:54.332472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b600 with addr=10.0.0.2, port=4420 00:29:26.517 [2024-07-12 00:41:54.332502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12
00:41:54.332506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b600 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332523] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.517 [2024-07-12 00:41:54.332585] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:26.518 [2024-07-12 00:41:54.332601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332688] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:26.518 [2024-07-12 00:41:54.332702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332754] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:26.518 [2024-07-12 00:41:54.332764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332791] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332833] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332887] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332914] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.332941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1880890 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.333013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 
00:41:54.333082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc9e770 is same with the state(5) to be set 00:29:26.518 [2024-07-12 00:41:54.333337] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9e770 was disconnected and freed. reset controller. 00:29:26.518 [2024-07-12 00:41:54.333506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b600 (9): Bad file descriptor 00:29:26.518 [2024-07-12 00:41:54.333555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.518 [2024-07-12 00:41:54.333639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.518 [2024-07-12 00:41:54.333662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.519 [2024-07-12 00:41:54.333733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333917] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.333982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.333996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334677] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.519 [2024-07-12 00:41:54.334884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.519 [2024-07-12 00:41:54.334901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.334916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.334933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.334949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.334966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.334981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.334997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 
00:41:54.335061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.520 [2024-07-12 00:41:54.335626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.335713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.335729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75fc70 is same with the state(5) to be set 00:29:26.520 [2024-07-12 00:41:54.335799] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x75fc70 was disconnected and freed. reset controller. 
00:29:26.520 [2024-07-12 00:41:54.336983] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:26.520 [2024-07-12 00:41:54.337075] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4610 (9): Bad file descriptor 00:29:26.520 [2024-07-12 00:41:54.337104] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:26.520 [2024-07-12 00:41:54.337119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:26.520 [2024-07-12 00:41:54.337138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:26.520 [2024-07-12 00:41:54.338557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.520 [2024-07-12 00:41:54.338744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.338982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.338999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.339014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.339031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.339046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.339063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.339078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.339096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.520 [2024-07-12 00:41:54.339111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.520 [2024-07-12 00:41:54.339129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339689] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.339996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 
00:41:54.340081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.521 [2024-07-12 00:41:54.340542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.521 [2024-07-12 00:41:54.340556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.522 [2024-07-12 00:41:54.340573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.522 [2024-07-12 00:41:54.340594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.522 [2024-07-12 00:41:54.340613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.522 [2024-07-12 00:41:54.340631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.522 [2024-07-12 00:41:54.340655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.522 [2024-07-12 00:41:54.340671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.522 [2024-07-12 00:41:54.340688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.522 [2024-07-12 00:41:54.340709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.522 [2024-07-12 00:41:54.340726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.522 [2024-07-12 00:41:54.340742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.522 [2024-07-12 00:41:54.340759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.805 [2024-07-12 00:41:54.348271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.805 [2024-07-12 00:41:54.348364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9d520 is same with the state(5) to be set 00:29:26.805 [2024-07-12 00:41:54.348464] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc9d520 was disconnected and freed. reset controller. 00:29:26.805 [2024-07-12 00:41:54.348776] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.805 [2024-07-12 00:41:54.348807] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:26.805 [2024-07-12 00:41:54.348863] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7d8b0 (9): Bad file descriptor 00:29:26.805 [2024-07-12 00:41:54.348952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.805 [2024-07-12 00:41:54.348976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.805 [2024-07-12 00:41:54.348994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.805 [2024-07-12 00:41:54.349021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.805 [2024-07-12 00:41:54.349039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.805 [2024-07-12 00:41:54.349053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.805 [2024-07-12 00:41:54.349069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.805 [2024-07-12 00:41:54.349083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.805 [2024-07-12 00:41:54.349103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf46a0 is same with the state(5) to be set 00:29:26.805 [2024-07-12 00:41:54.349137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf44c0 (9): 
Bad file descriptor
00:29:26.805 [2024-07-12 00:41:54.349197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.805 [2024-07-12 00:41:54.349219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.805 [2024-07-12 00:41:54.349235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.805 [2024-07-12 00:41:54.349250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.805 [2024-07-12 00:41:54.349265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.805 [2024-07-12 00:41:54.349279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.805 [2024-07-12 00:41:54.349294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.805 [2024-07-12 00:41:54.349308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.805 [2024-07-12 00:41:54.349322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb941a0 is same with the state(5) to be set
00:29:26.805 [2024-07-12 00:41:54.349373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.805 [2024-07-12 00:41:54.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.349409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.806 [2024-07-12 00:41:54.349423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.349439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.806 [2024-07-12 00:41:54.349454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.349469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:26.806 [2024-07-12 00:41:54.349483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.349497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762a90 is same with the state(5) to be set
00:29:26.806 [2024-07-12 00:41:54.349527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb92fd0 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.349560] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbf0e0 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.349605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75b980 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.351476] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:26.806 [2024-07-12 00:41:54.351664] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:26.806 [2024-07-12 00:41:54.351724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762a90 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.351961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.806 [2024-07-12 00:41:54.352006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4610 with addr=10.0.0.2, port=4420
00:29:26.806 [2024-07-12 00:41:54.352027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4610 is same with the state(5) to be set
00:29:26.806 [2024-07-12 00:41:54.352525] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:29:26.806 [2024-07-12 00:41:54.352743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.806 [2024-07-12 00:41:54.352785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7d8b0 with addr=10.0.0.2, port=4420
00:29:26.806 [2024-07-12 00:41:54.352805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d8b0 is same with the state(5) to be set
00:29:26.806 [2024-07-12 00:41:54.352841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4610 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.353253] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:26.806 [2024-07-12 00:41:54.353412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.806 [2024-07-12 00:41:54.353442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762a90 with addr=10.0.0.2, port=4420
00:29:26.806 [2024-07-12 00:41:54.353459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762a90 is same with the state(5) to be set
00:29:26.806 [2024-07-12 00:41:54.353559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.806 [2024-07-12 00:41:54.353584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b600 with addr=10.0.0.2, port=4420
00:29:26.806 [2024-07-12 00:41:54.353610] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b600 is same with the state(5) to be set
00:29:26.806 [2024-07-12 00:41:54.353631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7d8b0 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.353657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:29:26.806 [2024-07-12 00:41:54.353672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:29:26.806 [2024-07-12 00:41:54.353689] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:29:26.806 [2024-07-12 00:41:54.353819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.806 [2024-07-12 00:41:54.353849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762a90 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.353869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b600 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.353886] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:29:26.806 [2024-07-12 00:41:54.353900] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:29:26.806 [2024-07-12 00:41:54.353926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:26.806 [2024-07-12 00:41:54.354003] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.806 [2024-07-12 00:41:54.354022] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:29:26.806 [2024-07-12 00:41:54.354036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:29:26.806 [2024-07-12 00:41:54.354050] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:29:26.806 [2024-07-12 00:41:54.354071] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:29:26.806 [2024-07-12 00:41:54.354086] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:29:26.806 [2024-07-12 00:41:54.354100] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:29:26.806 [2024-07-12 00:41:54.354157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.806 [2024-07-12 00:41:54.354175] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.806 [2024-07-12 00:41:54.358855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf46a0 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.358984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb941a0 (9): Bad file descriptor
00:29:26.806 [2024-07-12 00:41:54.359203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.806 [2024-07-12 00:41:54.359907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.806 [2024-07-12 00:41:54.359931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.359948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.359964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.359981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.359996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.360970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.360987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.807 [2024-07-12 00:41:54.361327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.807 [2024-07-12 00:41:54.361344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.361359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.361376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75e9f0 is same with the state(5) to be set
00:29:26.808 [2024-07-12 00:41:54.362874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.362908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.362934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.362950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.362968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.362983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.808 [2024-07-12 00:41:54.363717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.808 [2024-07-12 00:41:54.363735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363911] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.363980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.363997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.808 [2024-07-12 00:41:54.364269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:26.808 [2024-07-12 00:41:54.364286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 
00:41:54.364470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.364976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.364991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.365008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.365023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.365040] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9bce0 is same with the state(5) to be set 00:29:26.809 [2024-07-12 
00:41:54.366515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.809 [2024-07-12 00:41:54.366967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.809 [2024-07-12 00:41:54.366984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.366999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.810 [2024-07-12 00:41:54.367116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 
00:41:54.367856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.367976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.367992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368042] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.810 [2024-07-12 00:41:54.368383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.810 [2024-07-12 00:41:54.368400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 
[2024-07-12 00:41:54.368415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.368668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.368685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6fb80 is same with the state(5) to be set 00:29:26.811 [2024-07-12 00:41:54.370182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.811 [2024-07-12 00:41:54.370514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.370969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.370987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 
00:41:54.371268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.811 [2024-07-12 00:41:54.371350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.811 [2024-07-12 00:41:54.371365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371448] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 
[2024-07-12 00:41:54.371830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.371982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.371997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.812 [2024-07-12 00:41:54.372321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.812 [2024-07-12 00:41:54.372338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1e0d0 is same with the state(5) to be set 00:29:26.812 [2024-07-12 00:41:54.374199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.812 [2024-07-12 00:41:54.374266] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:26.812 [2024-07-12 00:41:54.374286] 
nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:26.812 [2024-07-12 00:41:54.374414] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:26.812 [2024-07-12 00:41:54.374455] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:26.812 [2024-07-12 00:41:54.374574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:26.812 [2024-07-12 00:41:54.374608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:26.812 [2024-07-12 00:41:54.374875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.812 [2024-07-12 00:41:54.374921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x75b980 with addr=10.0.0.2, port=4420
00:29:26.812 [2024-07-12 00:41:54.374943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75b980 is same with the state(5) to be set
00:29:26.812 [2024-07-12 00:41:54.375065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.812 [2024-07-12 00:41:54.375092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb92fd0 with addr=10.0.0.2, port=4420
00:29:26.812 [2024-07-12 00:41:54.375108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb92fd0 is same with the state(5) to be set
00:29:26.812 [2024-07-12 00:41:54.375207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.812 [2024-07-12 00:41:54.375232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbbf0e0 with addr=10.0.0.2, port=4420
00:29:26.812 [2024-07-12 00:41:54.375248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbf0e0 is same with the state(5) to be set
00:29:26.812 [2024-07-12 00:41:54.376228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.812 [2024-07-12 00:41:54.376263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:1 through cid:62 (lba 16512 through 24320, step 128) ...]
00:29:26.814 [2024-07-12 00:41:54.378380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.814 [2024-07-12 00:41:54.378395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:26.814 [2024-07-12 00:41:54.378412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1b770 is same with the state(5) to be set
00:29:26.814 [2024-07-12 00:41:54.379896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.814 [2024-07-12 00:41:54.379926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:5 through cid:51 (lba 17024 through 22912, step 128); log entry for cid:52 truncated ...]
00:29:26.815 [2024-07-12 00:41:54.381538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52
nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.815 [2024-07-12 00:41:54.381750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.815 [2024-07-12 00:41:54.381932] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.815 [2024-07-12 00:41:54.381949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.816 [2024-07-12 00:41:54.381965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.816 [2024-07-12 00:41:54.381982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.816 [2024-07-12 00:41:54.381997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.816 [2024-07-12 00:41:54.382014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.816 [2024-07-12 00:41:54.382029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.816 [2024-07-12 00:41:54.382046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.816 [2024-07-12 00:41:54.382061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.816 [2024-07-12 00:41:54.382077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cbc0 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.384141] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:26.816 [2024-07-12 00:41:54.384193] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:29:26.816 [2024-07-12 00:41:54.384213] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:26.816 [2024-07-12 00:41:54.384234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:26.816 task offset: 26624 on job bdev=Nvme3n1 fails
00:29:26.816
00:29:26.816 Latency(us)
00:29:26.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:26.816 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme1n1 ended in about 1.13 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme1n1 : 1.13 113.12 7.07 56.56 0.00 373035.87 40583.77 316902.97
00:29:26.816 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme2n1 ended in about 1.11 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme2n1 : 1.11 181.52 11.34 57.80 0.00 258814.17 13398.47 307582.29
00:29:26.816 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme3n1 ended in about 1.10 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme3n1 : 1.10 174.73 10.92 58.24 0.00 260054.90 4660.34 310689.19
00:29:26.816 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme4n1 ended in about 1.14 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme4n1 : 1.14 172.66 10.79 56.38 0.00 259337.34 22913.33 282727.16
00:29:26.816 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme5n1 ended in about 1.14 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme5n1 : 1.14 171.23 10.70 56.20 0.00 255593.96 26991.12 307582.29
00:29:26.816 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme6n1 ended in about 1.12 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme6n1 : 1.12 125.01 7.81 57.15 0.00 311525.88 24175.50 304475.40
00:29:26.816 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme7n1 ended in about 1.11 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme7n1 : 1.11 173.64 10.85 6.33 0.00 306230.35 24855.13 298261.62
00:29:26.816 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme8n1 ended in about 1.15 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme8n1 : 1.15 111.45 6.97 55.72 0.00 325442.12 17864.63 310689.19
00:29:26.816 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme9n1 ended in about 1.15 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme9n1 : 1.15 114.57 7.16 55.55 0.00 312725.85 21651.15 310689.19
00:29:26.816 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:26.816 Job: Nvme10n1 ended in about 1.14 seconds with error
00:29:26.816 Verification LBA range: start 0x0 length 0x400
00:29:26.816 Nvme10n1 : 1.14 112.04 7.00 56.02 0.00 308734.99 24272.59 338651.21
00:29:26.816 ===================================================================================================================
00:29:26.816 Total : 1449.98 90.62 515.95 0.00 292531.21 4660.34 338651.21
00:29:26.816 [2024-07-12 00:41:54.413095] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:26.816 [2024-07-12 00:41:54.413187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:26.816 [2024-07-12 00:41:54.413515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.816 [2024-07-12
00:41:54.413552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf44c0 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.413574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf44c0 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.413744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.413788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbd4610 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.413807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd4610 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.413846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75b980 (9): Bad file descriptor 00:29:26.816 [2024-07-12 00:41:54.413873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb92fd0 (9): Bad file descriptor 00:29:26.816 [2024-07-12 00:41:54.413892] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbbf0e0 (9): Bad file descriptor 00:29:26.816 [2024-07-12 00:41:54.414226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.414256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb7d8b0 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.414274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7d8b0 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.414368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.414393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb9b600 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 
00:41:54.414409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9b600 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.414513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.414538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762a90 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.414554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762a90 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.414676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.414706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb941a0 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.414722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb941a0 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.414833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.816 [2024-07-12 00:41:54.414857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbf46a0 with addr=10.0.0.2, port=4420 00:29:26.816 [2024-07-12 00:41:54.414873] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbf46a0 is same with the state(5) to be set 00:29:26.816 [2024-07-12 00:41:54.414893] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf44c0 (9): Bad file descriptor 00:29:26.816 [2024-07-12 00:41:54.414913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4610 (9): Bad file descriptor 00:29:26.816 [2024-07-12 00:41:54.414932] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.816 [2024-07-12 00:41:54.414947] 
nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.816 [2024-07-12 00:41:54.414965] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.816 [2024-07-12 00:41:54.414989] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:26.816 [2024-07-12 00:41:54.415004] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:26.816 [2024-07-12 00:41:54.415018] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:26.816 [2024-07-12 00:41:54.415036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:26.816 [2024-07-12 00:41:54.415051] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:26.816 [2024-07-12 00:41:54.415064] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:26.817 [2024-07-12 00:41:54.415099] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:26.817 [2024-07-12 00:41:54.415127] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:26.817 [2024-07-12 00:41:54.415149] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:26.817 [2024-07-12 00:41:54.415171] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:26.817 [2024-07-12 00:41:54.415191] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:26.817 [2024-07-12 00:41:54.415905] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.415935] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.415950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.415971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb7d8b0 (9): Bad file descriptor 00:29:26.817 [2024-07-12 00:41:54.415991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb9b600 (9): Bad file descriptor 00:29:26.817 [2024-07-12 00:41:54.416010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762a90 (9): Bad file descriptor 00:29:26.817 [2024-07-12 00:41:54.416029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb941a0 (9): Bad file descriptor 00:29:26.817 [2024-07-12 00:41:54.416048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf46a0 (9): Bad file descriptor 00:29:26.817 [2024-07-12 00:41:54.416064] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416078] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416092] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:29:26.817 [2024-07-12 00:41:54.416111] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:26.817 [2024-07-12 00:41:54.416486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416512] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416527] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416541] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:26.817 [2024-07-12 00:41:54.416574] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416597] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416613] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:29:26.817 [2024-07-12 00:41:54.416630] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416644] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416658] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:26.817 [2024-07-12 00:41:54.416676] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416697] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416712] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:26.817 [2024-07-12 00:41:54.416730] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:26.817 [2024-07-12 00:41:54.416744] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:26.817 [2024-07-12 00:41:54.416758] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:26.817 [2024-07-12 00:41:54.416820] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416839] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416865] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.817 [2024-07-12 00:41:54.416877] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.078 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:27.078 00:41:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1026524 00:29:28.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1026524) - No such process 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.015 rmmod nvme_tcp 00:29:28.015 rmmod nvme_fabrics 00:29:28.015 rmmod nvme_keyring 00:29:28.015 00:41:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:28.015 00:41:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:30.600 00:29:30.600 real 0m7.325s 00:29:30.600 user 0m17.976s 00:29:30.600 sys 0m1.390s 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:30.600 ************************************ 00:29:30.600 END TEST nvmf_shutdown_tc3 00:29:30.600 ************************************ 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - 
SIGINT SIGTERM EXIT 00:29:30.600 00:29:30.600 real 0m26.528s 00:29:30.600 user 1m15.785s 00:29:30.600 sys 0m5.844s 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.600 00:41:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.600 ************************************ 00:29:30.600 END TEST nvmf_shutdown 00:29:30.600 ************************************ 00:29:30.600 00:41:57 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.600 00:41:57 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.600 00:41:57 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:29:30.600 00:41:57 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.600 00:41:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.600 ************************************ 00:29:30.600 START TEST nvmf_multicontroller 00:29:30.600 ************************************ 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:30.600 * Looking for test storage... 
00:29:30.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.600 00:41:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.600 
00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:30.600 00:41:58 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:30.600 00:41:58 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.973 00:41:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:31.973 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:31.973 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.973 00:41:59 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:31.973 Found net devices under 0000:08:00.0: cvl_0_0 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:31.973 Found net devices under 0000:08:00.1: cvl_0_1 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:31.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:29:31.973 00:29:31.973 --- 10.0.0.2 ping statistics --- 00:29:31.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.973 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:31.973 00:29:31.973 --- 10.0.0.1 ping statistics --- 00:29:31.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.973 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.973 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1028475 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1028475 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1028475 ']' 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:31.974 00:41:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 [2024-07-12 00:41:59.784966] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:31.974 [2024-07-12 00:41:59.785080] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.231 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.231 [2024-07-12 00:41:59.850618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.231 [2024-07-12 00:41:59.941355] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.231 [2024-07-12 00:41:59.941410] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.231 [2024-07-12 00:41:59.941427] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.231 [2024-07-12 00:41:59.941440] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.231 [2024-07-12 00:41:59.941451] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:32.231 [2024-07-12 00:41:59.941553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.231 [2024-07-12 00:41:59.941611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.231 [2024-07-12 00:41:59.941622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.231 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:32.231 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:29:32.231 00:42:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:32.231 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.231 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 [2024-07-12 00:42:00.082535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 Malloc0 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
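Aside from the xtrace noise, the `nvmf_tcp_init` wiring traced a few records back (the `ip netns add`, moving `cvl_0_0` into the namespace, the 10.0.0.x addresses, the iptables accept rule, and the two ping checks) reduces to a handful of iproute2 commands. The sketch below is a hedged reconstruction from this trace: the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are specific to this run, and the commands are echoed in dry-run form rather than executed, since the real steps need root and the physical NICs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init in
# nvmf/common.sh. Commands are recorded and echoed, not executed.
CMDS=()
run() { CMDS+=("$*"); echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # target-side network namespace
TGT_IF=cvl_0_0              # NIC handed to the target namespace
INI_IF=cvl_0_1              # NIC left in the initiator (host) namespace

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2      # initiator -> target sanity check, as in the trace
```

Both ends then ping each other (10.0.0.2 from the host, 10.0.0.1 from inside the namespace), which is exactly what the two ping statistics blocks above verify before the target is started with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt`.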
00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 [2024-07-12 00:42:00.146881] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 [2024-07-12 00:42:00.154794] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 Malloc1 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1028522 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1028522 /var/tmp/bdevperf.sock 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 1028522 ']' 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
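Stripped of the trace prefixes, the target-side RPC sequence that `multicontroller.sh` has issued by this point is short: one transport, then a malloc bdev, subsystem, namespace, and two listeners per controller path. The sketch below is a hedged dry-run summary; the `rpc.py` invocation is an assumption (the test goes through the `rpc_cmd` wrapper, which proxies to the target's RPC socket), the calls are echoed rather than executed since they need a live `nvmf_tgt`, and the cnode2 half is elided as a comment because it mirrors cnode1.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC setup traced above. rpc() only records and
# echoes what the test's rpc_cmd wrapper would send to the target.
CALLS=()
rpc() { CALLS+=("$*"); echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ...then the same for nqn.2016-06.io.spdk:cnode2 with Malloc1 and serial
# SPDK00000000000002. Two subsystems, each listening on ports 4420 and 4421,
# give bdevperf the multiple attach paths this multicontroller test exercises.
```

With that in place, bdevperf is launched against `/var/tmp/bdevperf.sock` and the test attaches `NVMe0` once, then deliberately re-attaches with conflicting parameters to provoke the `-114` "controller already exists" JSON-RPC errors seen below.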
00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.489 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:32.747 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:32.747 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:29:32.747 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:32.747 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:32.747 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.005 NVMe0n1 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.005 1 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.005 request: 00:29:33.005 { 00:29:33.005 "name": "NVMe0", 00:29:33.005 "trtype": "tcp", 00:29:33.005 "traddr": "10.0.0.2", 00:29:33.005 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:33.005 "hostaddr": "10.0.0.2", 00:29:33.005 "hostsvcid": "60000", 00:29:33.005 "adrfam": "ipv4", 00:29:33.005 "trsvcid": "4420", 00:29:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.005 "method": "bdev_nvme_attach_controller", 00:29:33.005 "req_id": 1 00:29:33.005 } 00:29:33.005 Got JSON-RPC error response 00:29:33.005 response: 00:29:33.005 { 00:29:33.005 "code": -114, 00:29:33.005 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:33.005 } 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:33.005 00:42:00 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.005 request: 00:29:33.005 { 00:29:33.005 "name": "NVMe0", 00:29:33.005 "trtype": "tcp", 
00:29:33.005 "traddr": "10.0.0.2", 00:29:33.005 "hostaddr": "10.0.0.2", 00:29:33.005 "hostsvcid": "60000", 00:29:33.005 "adrfam": "ipv4", 00:29:33.005 "trsvcid": "4420", 00:29:33.005 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:33.005 "method": "bdev_nvme_attach_controller", 00:29:33.005 "req_id": 1 00:29:33.005 } 00:29:33.005 Got JSON-RPC error response 00:29:33.005 response: 00:29:33.005 { 00:29:33.005 "code": -114, 00:29:33.005 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:33.005 } 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:33.005 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # 
type -t rpc_cmd 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.006 request: 00:29:33.006 { 00:29:33.006 "name": "NVMe0", 00:29:33.006 "trtype": "tcp", 00:29:33.006 "traddr": "10.0.0.2", 00:29:33.006 "hostaddr": "10.0.0.2", 00:29:33.006 "hostsvcid": "60000", 00:29:33.006 "adrfam": "ipv4", 00:29:33.006 "trsvcid": "4420", 00:29:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.006 "multipath": "disable", 00:29:33.006 "method": "bdev_nvme_attach_controller", 00:29:33.006 "req_id": 1 00:29:33.006 } 00:29:33.006 Got JSON-RPC error response 00:29:33.006 response: 00:29:33.006 { 00:29:33.006 "code": -114, 00:29:33.006 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:29:33.006 } 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 
10.0.0.2 -c 60000 -x failover 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.006 request: 00:29:33.006 { 00:29:33.006 "name": "NVMe0", 00:29:33.006 "trtype": "tcp", 00:29:33.006 "traddr": "10.0.0.2", 00:29:33.006 "hostaddr": "10.0.0.2", 00:29:33.006 "hostsvcid": "60000", 00:29:33.006 "adrfam": "ipv4", 00:29:33.006 "trsvcid": "4420", 00:29:33.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.006 "multipath": "failover", 00:29:33.006 "method": "bdev_nvme_attach_controller", 00:29:33.006 "req_id": 1 00:29:33.006 } 00:29:33.006 Got JSON-RPC error response 00:29:33.006 response: 00:29:33.006 { 00:29:33.006 "code": -114, 00:29:33.006 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:29:33.006 } 
00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.006 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.264 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:33.264 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:33.264 00:42:00 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:34.638 0 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1028522 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1028522 ']' 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1028522 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1028522 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1028522' 00:29:34.638 killing process with pid 1028522 00:29:34.638 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1028522 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1028522 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:34.639 00:42:02 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:29:34.639 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:34.639 [2024-07-12 00:42:00.255431] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:34.639 [2024-07-12 00:42:00.255542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028522 ] 00:29:34.639 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.639 [2024-07-12 00:42:00.318213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.639 [2024-07-12 00:42:00.405620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.639 [2024-07-12 00:42:00.941613] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name b65f8142-2b72-4f1b-9687-68335e1ece83 already exists 00:29:34.639 [2024-07-12 00:42:00.941661] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:b65f8142-2b72-4f1b-9687-68335e1ece83 alias for bdev NVMe1n1 00:29:34.639 [2024-07-12 00:42:00.941680] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:34.639 Running I/O for 1 seconds... 
00:29:34.639 00:29:34.639 Latency(us) 00:29:34.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.639 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:34.639 NVMe0n1 : 1.01 16645.75 65.02 0.00 0.00 7676.48 6505.05 15049.01 00:29:34.639 =================================================================================================================== 00:29:34.639 Total : 16645.75 65.02 0.00 0.00 7676.48 6505.05 15049.01 00:29:34.639 Received shutdown signal, test time was about 1.000000 seconds 00:29:34.639 00:29:34.639 Latency(us) 00:29:34.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.639 =================================================================================================================== 00:29:34.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.639 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.639 rmmod nvme_tcp 00:29:34.639 rmmod nvme_fabrics 00:29:34.639 rmmod nvme_keyring 
00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1028475 ']' 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1028475 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 1028475 ']' 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 1028475 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1028475 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1028475' 00:29:34.639 killing process with pid 1028475 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 1028475 00:29:34.639 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 1028475 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.898 00:42:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.428 00:42:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:37.428 00:29:37.428 real 0m6.723s 00:29:37.428 user 0m10.745s 00:29:37.428 sys 0m1.957s 00:29:37.428 00:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:37.428 00:42:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.428 ************************************ 00:29:37.428 END TEST nvmf_multicontroller 00:29:37.428 ************************************ 00:29:37.428 00:42:04 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:37.428 00:42:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:37.428 00:42:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:37.428 00:42:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.428 ************************************ 00:29:37.428 START TEST nvmf_aer 00:29:37.428 ************************************ 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:37.428 * Looking for test storage... 
00:29:37.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:29:37.428 00:42:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:38.803 00:42:06 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 
== e810 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:38.803 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:38.803 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:38.803 Found net devices under 0000:08:00.0: cvl_0_0 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:38.803 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:38.804 Found net devices under 0000:08:00.1: cvl_0_1 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 
00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set lo up 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:38.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:38.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:29:38.804 00:29:38.804 --- 10.0.0.2 ping statistics --- 00:29:38.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.804 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:29:38.804 00:29:38.804 --- 10.0.0.1 ping statistics --- 00:29:38.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.804 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1030313 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1030313 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 1030313 ']' 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:38.804 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.804 [2024-07-12 00:42:06.522800] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:38.804 [2024-07-12 00:42:06.522889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.804 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.804 [2024-07-12 00:42:06.587882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:39.061 [2024-07-12 00:42:06.676021] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:39.061 [2024-07-12 00:42:06.676077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.061 [2024-07-12 00:42:06.676093] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.061 [2024-07-12 00:42:06.676106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.061 [2024-07-12 00:42:06.676117] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.061 [2024-07-12 00:42:06.676205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.061 [2024-07-12 00:42:06.676261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:39.061 [2024-07-12 00:42:06.676314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.061 [2024-07-12 00:42:06.676311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 [2024-07-12 00:42:06.818223] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.061 00:42:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 Malloc0 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 [2024-07-12 00:42:06.868323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.061 [ 00:29:39.061 { 00:29:39.061 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:39.061 "subtype": "Discovery", 00:29:39.061 "listen_addresses": [], 00:29:39.061 "allow_any_host": true, 00:29:39.061 "hosts": [] 00:29:39.061 }, 00:29:39.061 { 00:29:39.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.061 "subtype": "NVMe", 00:29:39.061 "listen_addresses": [ 00:29:39.061 { 00:29:39.061 "trtype": "TCP", 00:29:39.061 "adrfam": "IPv4", 00:29:39.061 "traddr": "10.0.0.2", 00:29:39.061 "trsvcid": "4420" 00:29:39.061 } 00:29:39.061 ], 00:29:39.061 "allow_any_host": true, 00:29:39.061 "hosts": [], 00:29:39.061 "serial_number": "SPDK00000000000001", 00:29:39.061 "model_number": "SPDK bdev Controller", 00:29:39.061 "max_namespaces": 2, 00:29:39.061 "min_cntlid": 1, 00:29:39.061 "max_cntlid": 65519, 00:29:39.061 "namespaces": [ 00:29:39.061 { 00:29:39.061 "nsid": 1, 00:29:39.061 "bdev_name": "Malloc0", 00:29:39.061 "name": "Malloc0", 00:29:39.061 "nguid": "1681BBBDFB284588BECED9EFDC201471", 00:29:39.061 "uuid": "1681bbbd-fb28-4588-bece-d9efdc201471" 00:29:39.061 } 00:29:39.061 ] 00:29:39.061 } 00:29:39.061 ] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1030346 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:39.061 00:42:06 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:29:39.061 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:29:39.317 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.317 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.317 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:29:39.317 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:29:39.317 00:42:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.317 Malloc1 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.317 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:39.318 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.318 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.318 [ 00:29:39.318 { 00:29:39.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:39.318 "subtype": "Discovery", 00:29:39.318 "listen_addresses": [], 00:29:39.318 "allow_any_host": true, 00:29:39.318 "hosts": [] 00:29:39.318 }, 00:29:39.318 { 00:29:39.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.318 "subtype": "NVMe", 00:29:39.318 "listen_addresses": [ 00:29:39.318 { 00:29:39.318 "trtype": "TCP", 00:29:39.318 "adrfam": "IPv4", 00:29:39.318 "traddr": "10.0.0.2", 00:29:39.318 "trsvcid": "4420" 00:29:39.318 } 00:29:39.318 ], 00:29:39.318 "allow_any_host": true, 00:29:39.575 "hosts": [], 00:29:39.575 "serial_number": "SPDK00000000000001", 00:29:39.575 "model_number": "SPDK bdev Controller", 00:29:39.575 "max_namespaces": 2, 00:29:39.575 "min_cntlid": 1, 00:29:39.575 "max_cntlid": 65519, 
00:29:39.575 "namespaces": [ 00:29:39.575 { 00:29:39.575 "nsid": 1, 00:29:39.575 "bdev_name": "Malloc0", 00:29:39.575 "name": "Malloc0", 00:29:39.575 "nguid": "1681BBBDFB284588BECED9EFDC201471", 00:29:39.575 "uuid": "1681bbbd-fb28-4588-bece-d9efdc201471" 00:29:39.575 }, 00:29:39.575 { 00:29:39.575 "nsid": 2, 00:29:39.575 "bdev_name": "Malloc1", 00:29:39.575 "name": "Malloc1", 00:29:39.575 "nguid": "B4B2063A99A04CD5B04B9FD3A963B0F6", 00:29:39.575 "uuid": "b4b2063a-99a0-4cd5-b04b-9fd3a963b0f6" 00:29:39.575 } 00:29:39.575 ] 00:29:39.575 } 00:29:39.575 ] 00:29:39.575 Asynchronous Event Request test 00:29:39.575 Attaching to 10.0.0.2 00:29:39.575 Attached to 10.0.0.2 00:29:39.575 Registering asynchronous event callbacks... 00:29:39.575 Starting namespace attribute notice tests for all controllers... 00:29:39.575 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:39.575 aer_cb - Changed Namespace 00:29:39.575 Cleaning up... 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1030346 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:39.575 
00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:39.575 rmmod nvme_tcp 00:29:39.575 rmmod nvme_fabrics 00:29:39.575 rmmod nvme_keyring 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1030313 ']' 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1030313 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 1030313 ']' 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 1030313 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1030313 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1030313' 00:29:39.575 killing process with pid 1030313 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 1030313 00:29:39.575 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 1030313 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.834 00:42:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.737 00:42:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.737 00:29:41.737 real 0m4.775s 00:29:41.737 user 0m3.646s 00:29:41.737 sys 0m1.590s 00:29:41.737 00:42:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:41.737 00:42:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 ************************************ 00:29:41.737 END TEST nvmf_aer 00:29:41.737 ************************************ 00:29:41.737 00:42:09 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:41.737 00:42:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
00:29:41.737 00:42:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:41.737 00:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:41.737 ************************************ 00:29:41.737 START TEST nvmf_async_init 00:29:41.737 ************************************ 00:29:41.737 00:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:41.996 * Looking for test storage... 00:29:41.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:41.996 00:42:09 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:41.996 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6e230e6507204c7ebc6e1bd26740b1bf 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:29:41.997 00:42:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 
00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.372 00:42:11 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:43.372 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:43.372 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.372 
00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:43.372 Found net devices under 0000:08:00.0: cvl_0_0 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:43.372 Found net devices under 0000:08:00.1: cvl_0_1 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init 
-- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.372 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.653 
00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:43.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:29:43.653 00:29:43.653 --- 10.0.0.2 ping statistics --- 00:29:43.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.653 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:43.653 00:29:43.653 --- 10.0.0.1 ping statistics --- 00:29:43.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.653 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 
00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1032356 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1032356 00:29:43.653 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 1032356 ']' 00:29:43.654 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.654 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:43.654 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.654 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:43.654 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.654 [2024-07-12 00:42:11.368063] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:29:43.654 [2024-07-12 00:42:11.368162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.654 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.654 [2024-07-12 00:42:11.434352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.915 [2024-07-12 00:42:11.524945] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.915 [2024-07-12 00:42:11.525008] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.916 [2024-07-12 00:42:11.525024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.916 [2024-07-12 00:42:11.525037] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.916 [2024-07-12 00:42:11.525049] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:43.916 [2024-07-12 00:42:11.525078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 [2024-07-12 00:42:11.652946] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 null0 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 
00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6e230e6507204c7ebc6e1bd26740b1bf 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.916 [2024-07-12 00:42:11.693161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.916 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.175 nvme0n1 00:29:44.175 00:42:11 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.175 [ 00:29:44.175 { 00:29:44.175 "name": "nvme0n1", 00:29:44.175 "aliases": [ 00:29:44.175 "6e230e65-0720-4c7e-bc6e-1bd26740b1bf" 00:29:44.175 ], 00:29:44.175 "product_name": "NVMe disk", 00:29:44.175 "block_size": 512, 00:29:44.175 "num_blocks": 2097152, 00:29:44.175 "uuid": "6e230e65-0720-4c7e-bc6e-1bd26740b1bf", 00:29:44.175 "assigned_rate_limits": { 00:29:44.175 "rw_ios_per_sec": 0, 00:29:44.175 "rw_mbytes_per_sec": 0, 00:29:44.175 "r_mbytes_per_sec": 0, 00:29:44.175 "w_mbytes_per_sec": 0 00:29:44.175 }, 00:29:44.175 "claimed": false, 00:29:44.175 "zoned": false, 00:29:44.175 "supported_io_types": { 00:29:44.175 "read": true, 00:29:44.175 "write": true, 00:29:44.175 "unmap": false, 00:29:44.175 "write_zeroes": true, 00:29:44.175 "flush": true, 00:29:44.175 "reset": true, 00:29:44.175 "compare": true, 00:29:44.175 "compare_and_write": true, 00:29:44.175 "abort": true, 00:29:44.175 "nvme_admin": true, 00:29:44.175 "nvme_io": true 00:29:44.175 }, 00:29:44.175 "memory_domains": [ 00:29:44.175 { 00:29:44.175 "dma_device_id": "system", 00:29:44.175 "dma_device_type": 1 00:29:44.175 } 00:29:44.175 ], 00:29:44.175 "driver_specific": { 00:29:44.175 "nvme": [ 00:29:44.175 { 00:29:44.175 "trid": { 00:29:44.175 "trtype": "TCP", 00:29:44.175 "adrfam": "IPv4", 00:29:44.175 "traddr": "10.0.0.2", 00:29:44.175 "trsvcid": "4420", 00:29:44.175 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:44.175 }, 00:29:44.175 "ctrlr_data": { 00:29:44.175 "cntlid": 1, 00:29:44.175 "vendor_id": "0x8086", 00:29:44.175 "model_number": "SPDK bdev Controller", 00:29:44.175 "serial_number": "00000000000000000000", 
00:29:44.175 "firmware_revision": "24.05.1", 00:29:44.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.175 "oacs": { 00:29:44.175 "security": 0, 00:29:44.175 "format": 0, 00:29:44.175 "firmware": 0, 00:29:44.175 "ns_manage": 0 00:29:44.175 }, 00:29:44.175 "multi_ctrlr": true, 00:29:44.175 "ana_reporting": false 00:29:44.175 }, 00:29:44.175 "vs": { 00:29:44.175 "nvme_version": "1.3" 00:29:44.175 }, 00:29:44.175 "ns_data": { 00:29:44.175 "id": 1, 00:29:44.175 "can_share": true 00:29:44.175 } 00:29:44.175 } 00:29:44.175 ], 00:29:44.175 "mp_policy": "active_passive" 00:29:44.175 } 00:29:44.175 } 00:29:44.175 ] 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.175 00:42:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.175 [2024-07-12 00:42:11.945836] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:44.175 [2024-07-12 00:42:11.945925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2389840 (9): Bad file descriptor 00:29:44.434 [2024-07-12 00:42:12.118742] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 [ 00:29:44.434 { 00:29:44.434 "name": "nvme0n1", 00:29:44.434 "aliases": [ 00:29:44.434 "6e230e65-0720-4c7e-bc6e-1bd26740b1bf" 00:29:44.434 ], 00:29:44.434 "product_name": "NVMe disk", 00:29:44.434 "block_size": 512, 00:29:44.434 "num_blocks": 2097152, 00:29:44.434 "uuid": "6e230e65-0720-4c7e-bc6e-1bd26740b1bf", 00:29:44.434 "assigned_rate_limits": { 00:29:44.434 "rw_ios_per_sec": 0, 00:29:44.434 "rw_mbytes_per_sec": 0, 00:29:44.434 "r_mbytes_per_sec": 0, 00:29:44.434 "w_mbytes_per_sec": 0 00:29:44.434 }, 00:29:44.434 "claimed": false, 00:29:44.434 "zoned": false, 00:29:44.434 "supported_io_types": { 00:29:44.434 "read": true, 00:29:44.434 "write": true, 00:29:44.434 "unmap": false, 00:29:44.434 "write_zeroes": true, 00:29:44.434 "flush": true, 00:29:44.434 "reset": true, 00:29:44.434 "compare": true, 00:29:44.434 "compare_and_write": true, 00:29:44.434 "abort": true, 00:29:44.434 "nvme_admin": true, 00:29:44.434 "nvme_io": true 00:29:44.434 }, 00:29:44.434 "memory_domains": [ 00:29:44.434 { 00:29:44.434 "dma_device_id": "system", 00:29:44.434 "dma_device_type": 1 00:29:44.434 } 00:29:44.434 ], 00:29:44.434 "driver_specific": { 00:29:44.434 "nvme": [ 00:29:44.434 { 00:29:44.434 "trid": { 00:29:44.434 "trtype": "TCP", 00:29:44.434 "adrfam": "IPv4", 00:29:44.434 "traddr": "10.0.0.2", 00:29:44.434 "trsvcid": "4420", 00:29:44.434 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:44.434 }, 00:29:44.434 "ctrlr_data": { 00:29:44.434 "cntlid": 2, 00:29:44.434 "vendor_id": "0x8086", 00:29:44.434 "model_number": "SPDK bdev Controller", 00:29:44.434 "serial_number": 
"00000000000000000000", 00:29:44.434 "firmware_revision": "24.05.1", 00:29:44.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.434 "oacs": { 00:29:44.434 "security": 0, 00:29:44.434 "format": 0, 00:29:44.434 "firmware": 0, 00:29:44.434 "ns_manage": 0 00:29:44.434 }, 00:29:44.434 "multi_ctrlr": true, 00:29:44.434 "ana_reporting": false 00:29:44.434 }, 00:29:44.434 "vs": { 00:29:44.434 "nvme_version": "1.3" 00:29:44.434 }, 00:29:44.434 "ns_data": { 00:29:44.434 "id": 1, 00:29:44.434 "can_share": true 00:29:44.434 } 00:29:44.434 } 00:29:44.434 ], 00:29:44.434 "mp_policy": "active_passive" 00:29:44.434 } 00:29:44.434 } 00:29:44.434 ] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zePTVezj9s 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zePTVezj9s 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 [2024-07-12 00:42:12.174634] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:44.434 [2024-07-12 00:42:12.174764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zePTVezj9s 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 [2024-07-12 00:42:12.182648] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zePTVezj9s 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.434 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.434 [2024-07-12 00:42:12.190665] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:29:44.434 [2024-07-12 00:42:12.190736] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:44.434 nvme0n1 00:29:44.435 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.435 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:44.435 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.435 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.435 [ 00:29:44.435 { 00:29:44.435 "name": "nvme0n1", 00:29:44.435 "aliases": [ 00:29:44.435 "6e230e65-0720-4c7e-bc6e-1bd26740b1bf" 00:29:44.435 ], 00:29:44.435 "product_name": "NVMe disk", 00:29:44.435 "block_size": 512, 00:29:44.435 "num_blocks": 2097152, 00:29:44.435 "uuid": "6e230e65-0720-4c7e-bc6e-1bd26740b1bf", 00:29:44.435 "assigned_rate_limits": { 00:29:44.435 "rw_ios_per_sec": 0, 00:29:44.435 "rw_mbytes_per_sec": 0, 00:29:44.435 "r_mbytes_per_sec": 0, 00:29:44.435 "w_mbytes_per_sec": 0 00:29:44.435 }, 00:29:44.435 "claimed": false, 00:29:44.435 "zoned": false, 00:29:44.435 "supported_io_types": { 00:29:44.435 "read": true, 00:29:44.435 "write": true, 00:29:44.435 "unmap": false, 00:29:44.435 "write_zeroes": true, 00:29:44.435 "flush": true, 00:29:44.435 "reset": true, 00:29:44.435 "compare": true, 00:29:44.435 "compare_and_write": true, 00:29:44.435 "abort": true, 00:29:44.435 "nvme_admin": true, 00:29:44.435 "nvme_io": true 00:29:44.435 }, 00:29:44.435 "memory_domains": [ 00:29:44.435 { 00:29:44.435 "dma_device_id": "system", 00:29:44.435 "dma_device_type": 1 00:29:44.435 } 00:29:44.435 ], 00:29:44.435 "driver_specific": { 00:29:44.435 "nvme": [ 00:29:44.435 { 00:29:44.435 "trid": { 00:29:44.435 "trtype": "TCP", 00:29:44.435 "adrfam": "IPv4", 00:29:44.435 "traddr": "10.0.0.2", 00:29:44.435 "trsvcid": "4421", 00:29:44.435 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:29:44.435 }, 00:29:44.435 "ctrlr_data": { 00:29:44.435 "cntlid": 3, 00:29:44.692 "vendor_id": "0x8086", 00:29:44.692 "model_number": "SPDK bdev Controller", 00:29:44.692 "serial_number": "00000000000000000000", 00:29:44.692 "firmware_revision": "24.05.1", 00:29:44.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:44.692 "oacs": { 00:29:44.692 "security": 0, 00:29:44.692 "format": 0, 00:29:44.692 "firmware": 0, 00:29:44.692 "ns_manage": 0 00:29:44.692 }, 00:29:44.692 "multi_ctrlr": true, 00:29:44.692 "ana_reporting": false 00:29:44.693 }, 00:29:44.693 "vs": { 00:29:44.693 "nvme_version": "1.3" 00:29:44.693 }, 00:29:44.693 "ns_data": { 00:29:44.693 "id": 1, 00:29:44.693 "can_share": true 00:29:44.693 } 00:29:44.693 } 00:29:44.693 ], 00:29:44.693 "mp_policy": "active_passive" 00:29:44.693 } 00:29:44.693 } 00:29:44.693 ] 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zePTVezj9s 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set 
+e 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:44.693 rmmod nvme_tcp 00:29:44.693 rmmod nvme_fabrics 00:29:44.693 rmmod nvme_keyring 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1032356 ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1032356 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 1032356 ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 1032356 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1032356 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1032356' 00:29:44.693 killing process with pid 1032356 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 1032356 00:29:44.693 [2024-07-12 00:42:12.364412] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:44.693 [2024-07-12 00:42:12.364456] 
app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 1032356 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.693 00:42:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.245 00:42:14 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:47.245 00:29:47.245 real 0m5.018s 00:29:47.245 user 0m1.887s 00:29:47.245 sys 0m1.526s 00:29:47.245 00:42:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:47.245 00:42:14 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 ************************************ 00:29:47.245 END TEST nvmf_async_init 00:29:47.245 ************************************ 00:29:47.245 00:42:14 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 
************************************ 00:29:47.245 START TEST dma 00:29:47.245 ************************************ 00:29:47.245 00:42:14 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:47.245 * Looking for test storage... 00:29:47.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.245 00:42:14 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.245 00:42:14 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.245 00:42:14 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.245 00:42:14 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.245 00:42:14 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.245 00:42:14 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.245 00:42:14 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.245 00:42:14 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:47.245 00:42:14 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.245 00:42:14 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.245 00:42:14 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:47.245 00:42:14 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:47.245 00:29:47.245 real 0m0.071s 00:29:47.245 user 0m0.030s 00:29:47.245 sys 0m0.046s 00:29:47.245 00:42:14 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:47.245 00:42:14 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 ************************************ 00:29:47.245 END TEST dma 00:29:47.245 ************************************ 00:29:47.245 00:42:14 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.245 00:42:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 ************************************ 00:29:47.245 START TEST nvmf_identify 00:29:47.245 ************************************ 00:29:47.245 00:42:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:47.245 * Looking for test storage... 
00:29:47.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.245 00:42:14 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.245 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:47.245 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.245 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.246 00:42:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:47.246 00:42:14 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:29:47.246 00:42:14 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.626 00:42:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.626 
00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:48.626 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:48.626 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:48.626 Found net devices under 0000:08:00.0: cvl_0_0 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:29:48.626 Found net devices under 0000:08:00.1: cvl_0_1 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.626 00:42:16 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:29:48.626 00:29:48.626 --- 10.0.0.2 ping statistics --- 00:29:48.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.626 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:29:48.626 00:29:48.626 --- 10.0.0.1 ping statistics --- 00:29:48.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.626 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:48.626 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1034028 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1034028 00:29:48.885 00:42:16 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 1034028 ']' 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:48.885 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.885 [2024-07-12 00:42:16.513056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:48.885 [2024-07-12 00:42:16.513143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.885 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.885 [2024-07-12 00:42:16.578169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.885 [2024-07-12 00:42:16.667087] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.885 [2024-07-12 00:42:16.667142] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.885 [2024-07-12 00:42:16.667157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.885 [2024-07-12 00:42:16.667172] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.885 [2024-07-12 00:42:16.667184] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.885 [2024-07-12 00:42:16.667260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:48.885 [2024-07-12 00:42:16.667316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:48.885 [2024-07-12 00:42:16.667365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:48.885 [2024-07-12 00:42:16.667367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 [2024-07-12 00:42:16.792221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 Malloc0
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 [2024-07-12 00:42:16.870507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:49.147 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:29:49.147 [
00:29:49.148 {
00:29:49.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:29:49.148 "subtype": "Discovery",
00:29:49.148 "listen_addresses": [
00:29:49.148 {
00:29:49.148 "trtype": "TCP",
00:29:49.148 "adrfam": "IPv4",
00:29:49.148 "traddr": "10.0.0.2",
00:29:49.148 "trsvcid": "4420"
00:29:49.148 }
00:29:49.148 ],
00:29:49.148 "allow_any_host": true,
00:29:49.148 "hosts": []
00:29:49.148 },
00:29:49.148 {
00:29:49.148 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:29:49.148 "subtype": "NVMe",
00:29:49.148 "listen_addresses": [
00:29:49.148 {
00:29:49.148 "trtype": "TCP",
00:29:49.148 "adrfam": "IPv4",
00:29:49.148 "traddr": "10.0.0.2",
00:29:49.148 "trsvcid": "4420"
00:29:49.148 }
00:29:49.148 ],
00:29:49.148 "allow_any_host": true,
00:29:49.148 "hosts": [],
00:29:49.148 "serial_number": "SPDK00000000000001",
00:29:49.148 "model_number": "SPDK bdev Controller",
00:29:49.148 "max_namespaces": 32,
00:29:49.148 "min_cntlid": 1,
00:29:49.148 "max_cntlid": 65519,
00:29:49.148 "namespaces": [
00:29:49.148 {
00:29:49.148 "nsid": 1,
00:29:49.148 "bdev_name": "Malloc0",
00:29:49.148 "name": "Malloc0",
00:29:49.148 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:29:49.148 "eui64": "ABCDEF0123456789",
00:29:49.148 "uuid": "acb27e64-2d15-4aeb-a5f7-e70b88ad1119"
00:29:49.148 }
00:29:49.148 ]
00:29:49.148 }
00:29:49.148 ]
00:29:49.148 00:42:16 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:49.148 00:42:16 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:29:49.148 [2024-07-12 00:42:16.912650] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
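The `rpc_cmd nvmf_get_subsystems` output above is plain JSON once the harness timestamps are stripped. As an illustrative sketch (not part of the test run), the listener and namespace details can be pulled out with nothing but Python's standard library; the JSON literal below is copied from the log:

```python
import json

# nvmf_get_subsystems output from the log above, timestamps removed.
subsystems = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [
     {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [
     {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "allow_any_host": true, "hosts": [],
   "serial_number": "SPDK00000000000001",
   "model_number": "SPDK bdev Controller",
   "max_namespaces": 32, "min_cntlid": 1, "max_cntlid": 65519,
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc0", "name": "Malloc0",
      "nguid": "ABCDEF0123456789ABCDEF0123456789",
      "eui64": "ABCDEF0123456789",
      "uuid": "acb27e64-2d15-4aeb-a5f7-e70b88ad1119"}]}
]
""")

# Index subsystems by NQN and collect each one's TCP listener endpoints.
by_nqn = {s["nqn"]: s for s in subsystems}
listeners = {nqn: [f'{a["traddr"]}:{a["trsvcid"]}' for a in s["listen_addresses"]]
             for nqn, s in by_nqn.items()}
```

This mirrors what the test itself verifies implicitly: both the discovery subsystem and `cnode1` are listening on 10.0.0.2:4420, and `cnode1` exposes the `Malloc0` bdev as namespace 1.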
00:29:49.148 [2024-07-12 00:42:16.912696] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034057 ] 00:29:49.148 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.148 [2024-07-12 00:42:16.953584] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:49.148 [2024-07-12 00:42:16.953658] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:49.148 [2024-07-12 00:42:16.953668] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:49.148 [2024-07-12 00:42:16.953684] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:49.148 [2024-07-12 00:42:16.953701] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:49.148 [2024-07-12 00:42:16.957663] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:49.148 [2024-07-12 00:42:16.957724] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xa14030 0 00:29:49.148 [2024-07-12 00:42:16.968605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:49.148 [2024-07-12 00:42:16.968634] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:49.148 [2024-07-12 00:42:16.968649] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:49.148 [2024-07-12 00:42:16.968660] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:49.148 [2024-07-12 00:42:16.968735] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.968753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
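The surrounding DEBUG trace walks the host driver through its controller-initialization state machine: read VS, read CAP, check CC.EN, disable, write CC.EN = 1, then poll until CSTS.RDY = 1. The following is a hypothetical toy model of just the enable handshake those states describe (not SPDK code; `FakeController` and its latency are invented for illustration):

```python
class FakeController:
    """Toy controller: CSTS.RDY follows CC.EN after a few poll cycles."""
    def __init__(self, latency=3):
        self.cc_en = 0        # CC.EN bit as last written by the host
        self.csts_rdy = 0     # CSTS.RDY bit as reported by the controller
        self._latency = latency
        self._countdown = 0

    def write_cc_en(self, value):
        self.cc_en = value
        self._countdown = self._latency  # RDY transitions only after a delay

    def poll_csts(self):
        if self._countdown > 0:
            self._countdown -= 1
            if self._countdown == 0:
                self.csts_rdy = self.cc_en
        return self.csts_rdy

def init_sequence(ctrlr, max_polls=100):
    """Mirror the log's states: check en -> enable via CC.EN=1 -> wait CSTS.RDY=1."""
    states = ["check en"]
    if ctrlr.cc_en == 0 and ctrlr.poll_csts() == 0:
        states.append("controller is disabled")
        ctrlr.write_cc_en(1)
        states.append("enable controller by writing CC.EN = 1")
    for _ in range(max_polls):
        if ctrlr.poll_csts() == 1:
            states.append("ready")
            return states
    raise TimeoutError("CSTS.RDY never became 1")
```

The real driver does the same dance over fabrics PROPERTY GET/SET capsules instead of memory-mapped registers, which is why each state in the trace is bracketed by `FABRIC PROPERTY GET`/`SET` commands.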
00:29:49.148 [2024-07-12 00:42:16.968762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 [2024-07-12 00:42:16.968784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:49.148 [2024-07-12 00:42:16.968813] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.975605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.975628] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.975637] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.975647] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.975671] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:49.148 [2024-07-12 00:42:16.975684] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:49.148 [2024-07-12 00:42:16.975694] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:49.148 [2024-07-12 00:42:16.975723] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.975733] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.975740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 [2024-07-12 00:42:16.975753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.148 [2024-07-12 00:42:16.975779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.975911] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.975931] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.975939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.975947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.975961] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:49.148 [2024-07-12 00:42:16.975976] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:49.148 [2024-07-12 00:42:16.975990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.975998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 [2024-07-12 00:42:16.976018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.148 [2024-07-12 00:42:16.976040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.976170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.976183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.976190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.976209] 
nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:49.148 [2024-07-12 00:42:16.976224] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.976237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976245] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 [2024-07-12 00:42:16.976264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.148 [2024-07-12 00:42:16.976286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.976425] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.976438] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.976446] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976453] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.976463] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.976481] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976490] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976498] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 
[2024-07-12 00:42:16.976510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.148 [2024-07-12 00:42:16.976531] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.976665] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.976681] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.976689] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.976712] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:49.148 [2024-07-12 00:42:16.976723] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.976737] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.976849] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:49.148 [2024-07-12 00:42:16.976858] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.976875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976883] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.976890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.148 [2024-07-12 00:42:16.976902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.148 [2024-07-12 00:42:16.976926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.148 [2024-07-12 00:42:16.977050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.148 [2024-07-12 00:42:16.977064] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.148 [2024-07-12 00:42:16.977072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.148 [2024-07-12 00:42:16.977079] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.148 [2024-07-12 00:42:16.977090] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:49.148 [2024-07-12 00:42:16.977107] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977124] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.977136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.149 [2024-07-12 00:42:16.977158] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.149 [2024-07-12 00:42:16.977308] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.149 [2024-07-12 00:42:16.977321] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.149 [2024-07-12 00:42:16.977329] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 
00:42:16.977337] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.149 [2024-07-12 00:42:16.977346] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:49.149 [2024-07-12 00:42:16.977356] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.977370] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:49.149 [2024-07-12 00:42:16.977385] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.977405] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.977429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.149 [2024-07-12 00:42:16.977452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.149 [2024-07-12 00:42:16.977627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.149 [2024-07-12 00:42:16.977644] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.149 [2024-07-12 00:42:16.977652] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977660] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa14030): datao=0, datal=4096, cccid=0 00:29:49.149 [2024-07-12 00:42:16.977669] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d100) on tqpair(0xa14030): expected_datao=0, payload_size=4096 00:29:49.149 [2024-07-12 00:42:16.977678] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977691] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977701] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977730] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.149 [2024-07-12 00:42:16.977742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.149 [2024-07-12 00:42:16.977750] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.149 [2024-07-12 00:42:16.977777] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:49.149 [2024-07-12 00:42:16.977787] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:49.149 [2024-07-12 00:42:16.977797] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:49.149 [2024-07-12 00:42:16.977806] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:49.149 [2024-07-12 00:42:16.977815] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:49.149 [2024-07-12 00:42:16.977824] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.977840] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.977854] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977862] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.977870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.977882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.149 [2024-07-12 00:42:16.977905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.149 [2024-07-12 00:42:16.978081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.149 [2024-07-12 00:42:16.978094] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.149 [2024-07-12 00:42:16.978101] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978109] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d100) on tqpair=0xa14030 00:29:49.149 [2024-07-12 00:42:16.978122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978131] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978138] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.978149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.149 [2024-07-12 00:42:16.978172] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978180] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 
00:42:16.978187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.978197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.149 [2024-07-12 00:42:16.978209] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978217] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.978237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.149 [2024-07-12 00:42:16.978247] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978255] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978262] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.978272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.149 [2024-07-12 00:42:16.978281] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.978301] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:49.149 [2024-07-12 00:42:16.978315] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978322] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 
00:42:16.978334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.149 [2024-07-12 00:42:16.978358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d100, cid 0, qid 0 00:29:49.149 [2024-07-12 00:42:16.978370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d260, cid 1, qid 0 00:29:49.149 [2024-07-12 00:42:16.978379] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d3c0, cid 2, qid 0 00:29:49.149 [2024-07-12 00:42:16.978388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.149 [2024-07-12 00:42:16.978396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d680, cid 4, qid 0 00:29:49.149 [2024-07-12 00:42:16.978571] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.149 [2024-07-12 00:42:16.978584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.149 [2024-07-12 00:42:16.978604] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978613] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d680) on tqpair=0xa14030 00:29:49.149 [2024-07-12 00:42:16.978624] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:49.149 [2024-07-12 00:42:16.978634] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:49.149 [2024-07-12 00:42:16.978654] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978664] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.978675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.149 [2024-07-12 00:42:16.978698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d680, cid 4, qid 0 00:29:49.149 [2024-07-12 00:42:16.978822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.149 [2024-07-12 00:42:16.978836] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.149 [2024-07-12 00:42:16.978844] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978851] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa14030): datao=0, datal=4096, cccid=4 00:29:49.149 [2024-07-12 00:42:16.978860] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d680) on tqpair(0xa14030): expected_datao=0, payload_size=4096 00:29:49.149 [2024-07-12 00:42:16.978868] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978886] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978896] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.978976] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.149 [2024-07-12 00:42:16.978989] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.149 [2024-07-12 00:42:16.978996] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.979004] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d680) on tqpair=0xa14030 00:29:49.149 [2024-07-12 00:42:16.979024] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:49.149 [2024-07-12 00:42:16.979065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.979076] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.979088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.149 [2024-07-12 00:42:16.979101] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.979108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.149 [2024-07-12 00:42:16.979116] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xa14030) 00:29:49.149 [2024-07-12 00:42:16.979126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.149 [2024-07-12 00:42:16.979153] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d680, cid 4, qid 0 00:29:49.149 [2024-07-12 00:42:16.979165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d7e0, cid 5, qid 0 00:29:49.149 [2024-07-12 00:42:16.979327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.149 [2024-07-12 00:42:16.979342] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.150 [2024-07-12 00:42:16.979350] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.150 [2024-07-12 00:42:16.979357] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xa14030): datao=0, datal=1024, cccid=4 00:29:49.150 [2024-07-12 00:42:16.979365] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d680) on tqpair(0xa14030): expected_datao=0, payload_size=1024 00:29:49.150 [2024-07-12 00:42:16.979374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.150 [2024-07-12 00:42:16.979385] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.150 [2024-07-12 00:42:16.979392] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.150 [2024-07-12 00:42:16.979402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.150 [2024-07-12 00:42:16.979412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.150 [2024-07-12 00:42:16.979420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.150 [2024-07-12 00:42:16.979427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d7e0) on tqpair=0xa14030 00:29:49.415 [2024-07-12 00:42:17.023615] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.415 [2024-07-12 00:42:17.023636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.415 [2024-07-12 00:42:17.023644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d680) on tqpair=0xa14030 00:29:49.415 [2024-07-12 00:42:17.023685] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023696] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa14030) 00:29:49.415 [2024-07-12 00:42:17.023709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.415 [2024-07-12 00:42:17.023741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d680, cid 4, qid 0 00:29:49.415 [2024-07-12 00:42:17.023881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.415 [2024-07-12 00:42:17.023896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.415 [2024-07-12 00:42:17.023904] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023911] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0xa14030): datao=0, datal=3072, cccid=4 00:29:49.415 [2024-07-12 00:42:17.023920] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d680) on tqpair(0xa14030): expected_datao=0, payload_size=3072 00:29:49.415 [2024-07-12 00:42:17.023928] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023940] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023948] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.023992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.415 [2024-07-12 00:42:17.024006] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.415 [2024-07-12 00:42:17.024013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.024021] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d680) on tqpair=0xa14030 00:29:49.415 [2024-07-12 00:42:17.024037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.024046] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xa14030) 00:29:49.415 [2024-07-12 00:42:17.024058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.415 [2024-07-12 00:42:17.024087] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d680, cid 4, qid 0 00:29:49.415 [2024-07-12 00:42:17.024244] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.415 [2024-07-12 00:42:17.024259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.415 [2024-07-12 00:42:17.024267] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.024274] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: c2h_data info on tqpair(0xa14030): datao=0, datal=8, cccid=4 00:29:49.415 [2024-07-12 00:42:17.024282] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa6d680) on tqpair(0xa14030): expected_datao=0, payload_size=8 00:29:49.415 [2024-07-12 00:42:17.024290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.024301] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.024310] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.064724] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.415 [2024-07-12 00:42:17.064743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.415 [2024-07-12 00:42:17.064751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.415 [2024-07-12 00:42:17.064759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d680) on tqpair=0xa14030 00:29:49.415 ===================================================== 00:29:49.415 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:49.415 ===================================================== 00:29:49.415 Controller Capabilities/Features 00:29:49.415 ================================ 00:29:49.415 Vendor ID: 0000 00:29:49.415 Subsystem Vendor ID: 0000 00:29:49.415 Serial Number: .................... 00:29:49.415 Model Number: ........................................ 
00:29:49.415 Firmware Version: 24.05.1 00:29:49.415 Recommended Arb Burst: 0 00:29:49.415 IEEE OUI Identifier: 00 00 00 00:29:49.415 Multi-path I/O 00:29:49.415 May have multiple subsystem ports: No 00:29:49.415 May have multiple controllers: No 00:29:49.415 Associated with SR-IOV VF: No 00:29:49.415 Max Data Transfer Size: 131072 00:29:49.415 Max Number of Namespaces: 0 00:29:49.415 Max Number of I/O Queues: 1024 00:29:49.415 NVMe Specification Version (VS): 1.3 00:29:49.415 NVMe Specification Version (Identify): 1.3 00:29:49.415 Maximum Queue Entries: 128 00:29:49.415 Contiguous Queues Required: Yes 00:29:49.415 Arbitration Mechanisms Supported 00:29:49.415 Weighted Round Robin: Not Supported 00:29:49.415 Vendor Specific: Not Supported 00:29:49.415 Reset Timeout: 15000 ms 00:29:49.415 Doorbell Stride: 4 bytes 00:29:49.415 NVM Subsystem Reset: Not Supported 00:29:49.415 Command Sets Supported 00:29:49.415 NVM Command Set: Supported 00:29:49.415 Boot Partition: Not Supported 00:29:49.415 Memory Page Size Minimum: 4096 bytes 00:29:49.415 Memory Page Size Maximum: 4096 bytes 00:29:49.415 Persistent Memory Region: Not Supported 00:29:49.415 Optional Asynchronous Events Supported 00:29:49.415 Namespace Attribute Notices: Not Supported 00:29:49.415 Firmware Activation Notices: Not Supported 00:29:49.415 ANA Change Notices: Not Supported 00:29:49.415 PLE Aggregate Log Change Notices: Not Supported 00:29:49.415 LBA Status Info Alert Notices: Not Supported 00:29:49.415 EGE Aggregate Log Change Notices: Not Supported 00:29:49.415 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.415 Zone Descriptor Change Notices: Not Supported 00:29:49.415 Discovery Log Change Notices: Supported 00:29:49.415 Controller Attributes 00:29:49.415 128-bit Host Identifier: Not Supported 00:29:49.415 Non-Operational Permissive Mode: Not Supported 00:29:49.415 NVM Sets: Not Supported 00:29:49.415 Read Recovery Levels: Not Supported 00:29:49.415 Endurance Groups: Not Supported 
00:29:49.415 Predictable Latency Mode: Not Supported 00:29:49.415 Traffic Based Keep ALive: Not Supported 00:29:49.415 Namespace Granularity: Not Supported 00:29:49.415 SQ Associations: Not Supported 00:29:49.415 UUID List: Not Supported 00:29:49.415 Multi-Domain Subsystem: Not Supported 00:29:49.415 Fixed Capacity Management: Not Supported 00:29:49.415 Variable Capacity Management: Not Supported 00:29:49.415 Delete Endurance Group: Not Supported 00:29:49.415 Delete NVM Set: Not Supported 00:29:49.416 Extended LBA Formats Supported: Not Supported 00:29:49.416 Flexible Data Placement Supported: Not Supported 00:29:49.416 00:29:49.416 Controller Memory Buffer Support 00:29:49.416 ================================ 00:29:49.416 Supported: No 00:29:49.416 00:29:49.416 Persistent Memory Region Support 00:29:49.416 ================================ 00:29:49.416 Supported: No 00:29:49.416 00:29:49.416 Admin Command Set Attributes 00:29:49.416 ============================ 00:29:49.416 Security Send/Receive: Not Supported 00:29:49.416 Format NVM: Not Supported 00:29:49.416 Firmware Activate/Download: Not Supported 00:29:49.416 Namespace Management: Not Supported 00:29:49.416 Device Self-Test: Not Supported 00:29:49.416 Directives: Not Supported 00:29:49.416 NVMe-MI: Not Supported 00:29:49.416 Virtualization Management: Not Supported 00:29:49.416 Doorbell Buffer Config: Not Supported 00:29:49.416 Get LBA Status Capability: Not Supported 00:29:49.416 Command & Feature Lockdown Capability: Not Supported 00:29:49.416 Abort Command Limit: 1 00:29:49.416 Async Event Request Limit: 4 00:29:49.416 Number of Firmware Slots: N/A 00:29:49.416 Firmware Slot 1 Read-Only: N/A 00:29:49.416 Firmware Activation Without Reset: N/A 00:29:49.416 Multiple Update Detection Support: N/A 00:29:49.416 Firmware Update Granularity: No Information Provided 00:29:49.416 Per-Namespace SMART Log: No 00:29:49.416 Asymmetric Namespace Access Log Page: Not Supported 00:29:49.416 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:49.416 Command Effects Log Page: Not Supported 00:29:49.416 Get Log Page Extended Data: Supported 00:29:49.416 Telemetry Log Pages: Not Supported 00:29:49.416 Persistent Event Log Pages: Not Supported 00:29:49.416 Supported Log Pages Log Page: May Support 00:29:49.416 Commands Supported & Effects Log Page: Not Supported 00:29:49.416 Feature Identifiers & Effects Log Page:May Support 00:29:49.416 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.416 Data Area 4 for Telemetry Log: Not Supported 00:29:49.416 Error Log Page Entries Supported: 128 00:29:49.416 Keep Alive: Not Supported 00:29:49.416 00:29:49.416 NVM Command Set Attributes 00:29:49.416 ========================== 00:29:49.416 Submission Queue Entry Size 00:29:49.416 Max: 1 00:29:49.416 Min: 1 00:29:49.416 Completion Queue Entry Size 00:29:49.416 Max: 1 00:29:49.416 Min: 1 00:29:49.416 Number of Namespaces: 0 00:29:49.416 Compare Command: Not Supported 00:29:49.416 Write Uncorrectable Command: Not Supported 00:29:49.416 Dataset Management Command: Not Supported 00:29:49.416 Write Zeroes Command: Not Supported 00:29:49.416 Set Features Save Field: Not Supported 00:29:49.416 Reservations: Not Supported 00:29:49.416 Timestamp: Not Supported 00:29:49.416 Copy: Not Supported 00:29:49.416 Volatile Write Cache: Not Present 00:29:49.416 Atomic Write Unit (Normal): 1 00:29:49.416 Atomic Write Unit (PFail): 1 00:29:49.416 Atomic Compare & Write Unit: 1 00:29:49.416 Fused Compare & Write: Supported 00:29:49.416 Scatter-Gather List 00:29:49.416 SGL Command Set: Supported 00:29:49.416 SGL Keyed: Supported 00:29:49.416 SGL Bit Bucket Descriptor: Not Supported 00:29:49.416 SGL Metadata Pointer: Not Supported 00:29:49.416 Oversized SGL: Not Supported 00:29:49.416 SGL Metadata Address: Not Supported 00:29:49.416 SGL Offset: Supported 00:29:49.416 Transport SGL Data Block: Not Supported 00:29:49.416 Replay Protected Memory Block: Not Supported 00:29:49.416 00:29:49.416 
Firmware Slot Information 00:29:49.416 ========================= 00:29:49.416 Active slot: 0 00:29:49.416 00:29:49.416 00:29:49.416 Error Log 00:29:49.416 ========= 00:29:49.416 00:29:49.416 Active Namespaces 00:29:49.416 ================= 00:29:49.416 Discovery Log Page 00:29:49.416 ================== 00:29:49.416 Generation Counter: 2 00:29:49.416 Number of Records: 2 00:29:49.416 Record Format: 0 00:29:49.416 00:29:49.416 Discovery Log Entry 0 00:29:49.416 ---------------------- 00:29:49.416 Transport Type: 3 (TCP) 00:29:49.416 Address Family: 1 (IPv4) 00:29:49.416 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:49.416 Entry Flags: 00:29:49.416 Duplicate Returned Information: 1 00:29:49.416 Explicit Persistent Connection Support for Discovery: 1 00:29:49.416 Transport Requirements: 00:29:49.416 Secure Channel: Not Required 00:29:49.416 Port ID: 0 (0x0000) 00:29:49.416 Controller ID: 65535 (0xffff) 00:29:49.416 Admin Max SQ Size: 128 00:29:49.416 Transport Service Identifier: 4420 00:29:49.416 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:49.416 Transport Address: 10.0.0.2 00:29:49.416 Discovery Log Entry 1 00:29:49.416 ---------------------- 00:29:49.416 Transport Type: 3 (TCP) 00:29:49.416 Address Family: 1 (IPv4) 00:29:49.416 Subsystem Type: 2 (NVM Subsystem) 00:29:49.416 Entry Flags: 00:29:49.416 Duplicate Returned Information: 0 00:29:49.416 Explicit Persistent Connection Support for Discovery: 0 00:29:49.416 Transport Requirements: 00:29:49.416 Secure Channel: Not Required 00:29:49.416 Port ID: 0 (0x0000) 00:29:49.416 Controller ID: 65535 (0xffff) 00:29:49.416 Admin Max SQ Size: 128 00:29:49.416 Transport Service Identifier: 4420 00:29:49.416 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:49.416 Transport Address: 10.0.0.2 [2024-07-12 00:42:17.064885] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:49.416 [2024-07-12 00:42:17.064912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.416 [2024-07-12 00:42:17.064926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.416 [2024-07-12 00:42:17.064940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.416 [2024-07-12 00:42:17.064952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.416 [2024-07-12 00:42:17.064972] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.064982] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.064989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.416 [2024-07-12 00:42:17.065001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.416 [2024-07-12 00:42:17.065027] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.416 [2024-07-12 00:42:17.065134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.416 [2024-07-12 00:42:17.065148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.416 [2024-07-12 00:42:17.065156] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.065164] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.416 [2024-07-12 00:42:17.065179] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.065188] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.416 [2024-07-12 
00:42:17.065195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.416 [2024-07-12 00:42:17.065207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.416 [2024-07-12 00:42:17.065234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.416 [2024-07-12 00:42:17.065385] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.416 [2024-07-12 00:42:17.065398] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.416 [2024-07-12 00:42:17.065406] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.065414] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.416 [2024-07-12 00:42:17.065423] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:49.416 [2024-07-12 00:42:17.065433] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:49.416 [2024-07-12 00:42:17.065450] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.065459] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.416 [2024-07-12 00:42:17.065467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.065478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.065500] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.065630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 
00:42:17.065645] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.065653] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.065680] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.065709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.065736] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.065877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.065890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.065898] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065905] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.065922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.065939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.065951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 
00:42:17.065973] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.066082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.066096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.066104] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.066129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066139] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066147] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.066158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.066180] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.066306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.066318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.066326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066334] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.066351] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066360] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066368] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.066379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.066401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.066528] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.066542] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.066550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066558] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.066575] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066601] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.066613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.066635] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.066778] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.066792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.066800] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.066825] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066835] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.066842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.066853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.066875] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.066989] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.067003] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.067011] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.067036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067053] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.067065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.067086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.067212] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.067225] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.067233] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067240] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.067257] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067274] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.067286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.067307] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.067466] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.067480] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.067488] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.067513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067523] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.067530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.067541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.067563] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 
00:42:17.071613] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.071636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.071645] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.071652] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.071671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.071681] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.071689] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xa14030) 00:29:49.417 [2024-07-12 00:42:17.071701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.417 [2024-07-12 00:42:17.071725] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa6d520, cid 3, qid 0 00:29:49.417 [2024-07-12 00:42:17.071829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.417 [2024-07-12 00:42:17.071843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.417 [2024-07-12 00:42:17.071851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.417 [2024-07-12 00:42:17.071858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa6d520) on tqpair=0xa14030 00:29:49.417 [2024-07-12 00:42:17.071872] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:29:49.417 00:29:49.417 00:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
00:29:49.417 [2024-07-12 00:42:17.106529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:49.417 [2024-07-12 00:42:17.106579] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1034059 ] 00:29:49.417 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.417 [2024-07-12 00:42:17.148164] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:49.417 [2024-07-12 00:42:17.148227] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:49.417 [2024-07-12 00:42:17.148237] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:49.417 [2024-07-12 00:42:17.148253] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:49.417 [2024-07-12 00:42:17.148267] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:49.418 [2024-07-12 00:42:17.148441] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:49.418 [2024-07-12 00:42:17.148492] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xde8030 0 00:29:49.418 [2024-07-12 00:42:17.162611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:49.418 [2024-07-12 00:42:17.162638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:49.418 [2024-07-12 00:42:17.162646] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:49.418 [2024-07-12 00:42:17.162653] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:49.418 [2024-07-12 00:42:17.162697] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 
[2024-07-12 00:42:17.162708] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.162716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.162731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:49.418 [2024-07-12 00:42:17.162763] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.170617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.170635] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.170643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.170671] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:49.418 [2024-07-12 00:42:17.170682] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:49.418 [2024-07-12 00:42:17.170692] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:49.418 [2024-07-12 00:42:17.170715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170726] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.170746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 
00:42:17.170771] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.170867] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.170882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.170890] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170898] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.170911] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:49.418 [2024-07-12 00:42:17.170927] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:49.418 [2024-07-12 00:42:17.170941] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.170956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.170968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.170991] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.171086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.171099] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.171107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171114] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) 
on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.171124] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:49.418 [2024-07-12 00:42:17.171138] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171160] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171167] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.171179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.171206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.171301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.171315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.171323] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171330] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.171340] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171375] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.171387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.171408] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.171496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.171509] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.171517] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.171533] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:49.418 [2024-07-12 00:42:17.171542] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171556] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171667] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:49.418 [2024-07-12 00:42:17.171676] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171699] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171706] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.171718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.171741] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.171827] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.171841] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.171849] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.171866] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:49.418 [2024-07-12 00:42:17.171884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171893] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.171901] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.171912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.171940] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.172039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.418 [2024-07-12 00:42:17.172053] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.418 [2024-07-12 00:42:17.172060] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.172068] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.418 [2024-07-12 00:42:17.172076] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:49.418 [2024-07-12 00:42:17.172086] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:49.418 [2024-07-12 00:42:17.172100] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:49.418 [2024-07-12 00:42:17.172121] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:49.418 [2024-07-12 00:42:17.172139] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.418 [2024-07-12 00:42:17.172148] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.418 [2024-07-12 00:42:17.172160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.418 [2024-07-12 00:42:17.172183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.418 [2024-07-12 00:42:17.172315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.418 [2024-07-12 00:42:17.172330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.418 [2024-07-12 00:42:17.172338] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172345] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=4096, cccid=0 00:29:49.419 [2024-07-12 00:42:17.172354] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41100) on 
tqpair(0xde8030): expected_datao=0, payload_size=4096 00:29:49.419 [2024-07-12 00:42:17.172363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172374] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172383] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.419 [2024-07-12 00:42:17.172407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.419 [2024-07-12 00:42:17.172414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172422] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.419 [2024-07-12 00:42:17.172439] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:49.419 [2024-07-12 00:42:17.172450] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:49.419 [2024-07-12 00:42:17.172459] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:49.419 [2024-07-12 00:42:17.172467] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:49.419 [2024-07-12 00:42:17.172475] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:49.419 [2024-07-12 00:42:17.172485] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.172501] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.172518] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172535] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.419 [2024-07-12 00:42:17.172569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.419 [2024-07-12 00:42:17.172672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.419 [2024-07-12 00:42:17.172688] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.419 [2024-07-12 00:42:17.172695] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41100) on tqpair=0xde8030 00:29:49.419 [2024-07-12 00:42:17.172715] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172723] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172731] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.419 [2024-07-12 00:42:17.172753] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172761] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172769] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172778] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.419 [2024-07-12 00:42:17.172789] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172797] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.419 [2024-07-12 00:42:17.172825] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172840] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.419 [2024-07-12 00:42:17.172860] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.172880] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.172894] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.172902] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.172913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.419 
[2024-07-12 00:42:17.172937] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41100, cid 0, qid 0 00:29:49.419 [2024-07-12 00:42:17.172949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41260, cid 1, qid 0 00:29:49.419 [2024-07-12 00:42:17.172958] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe413c0, cid 2, qid 0 00:29:49.419 [2024-07-12 00:42:17.172967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.419 [2024-07-12 00:42:17.172979] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.419 [2024-07-12 00:42:17.173095] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.419 [2024-07-12 00:42:17.173110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.419 [2024-07-12 00:42:17.173117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.173125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.419 [2024-07-12 00:42:17.173134] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:49.419 [2024-07-12 00:42:17.173144] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.173160] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.173173] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.173184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.173193] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.173200] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.173212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.419 [2024-07-12 00:42:17.173234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.419 [2024-07-12 00:42:17.176604] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.419 [2024-07-12 00:42:17.176622] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.419 [2024-07-12 00:42:17.176630] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.176637] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.419 [2024-07-12 00:42:17.176717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.176739] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.176755] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.176764] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.176776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.419 [2024-07-12 00:42:17.176799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.419 [2024-07-12 00:42:17.176940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:29:49.419 [2024-07-12 00:42:17.176956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.419 [2024-07-12 00:42:17.176963] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.176971] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=4096, cccid=4 00:29:49.419 [2024-07-12 00:42:17.176980] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41680) on tqpair(0xde8030): expected_datao=0, payload_size=4096 00:29:49.419 [2024-07-12 00:42:17.176988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.177000] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.177008] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.177030] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.419 [2024-07-12 00:42:17.177048] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.419 [2024-07-12 00:42:17.177057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.177064] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.419 [2024-07-12 00:42:17.177082] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:49.419 [2024-07-12 00:42:17.177100] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.177119] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:49.419 [2024-07-12 00:42:17.177134] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.419 [2024-07-12 00:42:17.177143] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.419 [2024-07-12 00:42:17.177155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.419 [2024-07-12 00:42:17.177177] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.419 [2024-07-12 00:42:17.177313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.420 [2024-07-12 00:42:17.177325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.420 [2024-07-12 00:42:17.177333] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177340] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=4096, cccid=4 00:29:49.420 [2024-07-12 00:42:17.177349] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41680) on tqpair(0xde8030): expected_datao=0, payload_size=4096 00:29:49.420 [2024-07-12 00:42:17.177357] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177369] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177377] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.177401] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.177408] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177416] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.177439] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177459] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177475] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177483] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.177495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.420 [2024-07-12 00:42:17.177520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.420 [2024-07-12 00:42:17.177657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.420 [2024-07-12 00:42:17.177672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.420 [2024-07-12 00:42:17.177680] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177687] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=4096, cccid=4 00:29:49.420 [2024-07-12 00:42:17.177696] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41680) on tqpair(0xde8030): expected_datao=0, payload_size=4096 00:29:49.420 [2024-07-12 00:42:17.177704] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177722] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177731] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.177765] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:49.420 [2024-07-12 00:42:17.177773] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177780] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.177797] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177813] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177830] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177843] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177853] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177862] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:49.420 [2024-07-12 00:42:17.177871] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:49.420 [2024-07-12 00:42:17.177881] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:49.420 [2024-07-12 00:42:17.177906] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177916] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.177928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.420 [2024-07-12 00:42:17.177940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.177955] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.177966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.420 [2024-07-12 00:42:17.177992] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.420 [2024-07-12 00:42:17.178004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe417e0, cid 5, qid 0 00:29:49.420 [2024-07-12 00:42:17.178130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.178143] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.178150] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.178171] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.178182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.178189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178197] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe417e0) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.178213] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178223] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.178234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.420 [2024-07-12 00:42:17.178261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe417e0, cid 5, qid 0 00:29:49.420 [2024-07-12 00:42:17.178395] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.178410] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.178417] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178425] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe417e0) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.178442] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.178463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.420 [2024-07-12 00:42:17.178485] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe417e0, cid 5, qid 0 00:29:49.420 [2024-07-12 00:42:17.178597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.178612] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.178619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.420 [2024-07-12 00:42:17.178627] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe417e0) on tqpair=0xde8030 00:29:49.420 [2024-07-12 00:42:17.178644] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.420 
[2024-07-12 00:42:17.178653] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xde8030) 00:29:49.420 [2024-07-12 00:42:17.178665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.420 [2024-07-12 00:42:17.178689] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe417e0, cid 5, qid 0 00:29:49.420 [2024-07-12 00:42:17.178776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.420 [2024-07-12 00:42:17.178790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.420 [2024-07-12 00:42:17.178798] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.178806] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe417e0) on tqpair=0xde8030 00:29:49.421 [2024-07-12 00:42:17.178825] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.178836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xde8030) 00:29:49.421 [2024-07-12 00:42:17.178847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.421 [2024-07-12 00:42:17.178860] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.178868] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xde8030) 00:29:49.421 [2024-07-12 00:42:17.178879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.421 [2024-07-12 00:42:17.178891] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.178899] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xde8030) 00:29:49.421 [2024-07-12 00:42:17.178910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.421 [2024-07-12 00:42:17.178923] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.178931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xde8030) 00:29:49.421 [2024-07-12 00:42:17.178941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.421 [2024-07-12 00:42:17.178969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe417e0, cid 5, qid 0 00:29:49.421 [2024-07-12 00:42:17.178981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41680, cid 4, qid 0 00:29:49.421 [2024-07-12 00:42:17.178990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41940, cid 6, qid 0 00:29:49.421 [2024-07-12 00:42:17.178999] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41aa0, cid 7, qid 0 00:29:49.421 [2024-07-12 00:42:17.179185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.421 [2024-07-12 00:42:17.179201] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.421 [2024-07-12 00:42:17.179209] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179216] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=8192, cccid=5 00:29:49.421 [2024-07-12 00:42:17.179225] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe417e0) on tqpair(0xde8030): expected_datao=0, payload_size=8192 00:29:49.421 [2024-07-12 00:42:17.179233] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179245] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179253] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179263] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.421 [2024-07-12 00:42:17.179273] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.421 [2024-07-12 00:42:17.179281] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179288] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=512, cccid=4 00:29:49.421 [2024-07-12 00:42:17.179296] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41680) on tqpair(0xde8030): expected_datao=0, payload_size=512 00:29:49.421 [2024-07-12 00:42:17.179305] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179315] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179323] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.421 [2024-07-12 00:42:17.179343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.421 [2024-07-12 00:42:17.179350] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179357] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=512, cccid=6 00:29:49.421 [2024-07-12 00:42:17.179365] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41940) on tqpair(0xde8030): expected_datao=0, payload_size=512 00:29:49.421 [2024-07-12 00:42:17.179374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.421 [2024-07-12 
00:42:17.179384] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179392] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:49.421 [2024-07-12 00:42:17.179411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:49.421 [2024-07-12 00:42:17.179418] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179426] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xde8030): datao=0, datal=4096, cccid=7 00:29:49.421 [2024-07-12 00:42:17.179434] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe41aa0) on tqpair(0xde8030): expected_datao=0, payload_size=4096 00:29:49.421 [2024-07-12 00:42:17.179442] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179453] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179461] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.421 [2024-07-12 00:42:17.179488] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.421 [2024-07-12 00:42:17.179497] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe417e0) on tqpair=0xde8030 00:29:49.421 [2024-07-12 00:42:17.179525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.421 [2024-07-12 00:42:17.179538] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.421 [2024-07-12 00:42:17.179545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179552] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41680) on tqpair=0xde8030 00:29:49.421 [2024-07-12 00:42:17.179569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.421 [2024-07-12 00:42:17.179581] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.421 [2024-07-12 00:42:17.179596] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179604] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41940) on tqpair=0xde8030 00:29:49.421 [2024-07-12 00:42:17.179622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.421 [2024-07-12 00:42:17.179634] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.421 [2024-07-12 00:42:17.179641] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.421 [2024-07-12 00:42:17.179649] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41aa0) on tqpair=0xde8030 00:29:49.421 ===================================================== 00:29:49.421 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.421 ===================================================== 00:29:49.421 Controller Capabilities/Features 00:29:49.421 ================================ 00:29:49.421 Vendor ID: 8086 00:29:49.421 Subsystem Vendor ID: 8086 00:29:49.421 Serial Number: SPDK00000000000001 00:29:49.421 Model Number: SPDK bdev Controller 00:29:49.421 Firmware Version: 24.05.1 00:29:49.421 Recommended Arb Burst: 6 00:29:49.421 IEEE OUI Identifier: e4 d2 5c 00:29:49.421 Multi-path I/O 00:29:49.421 May have multiple subsystem ports: Yes 00:29:49.421 May have multiple controllers: Yes 00:29:49.421 Associated with SR-IOV VF: No 00:29:49.421 Max Data Transfer Size: 131072 00:29:49.421 Max Number of Namespaces: 32 00:29:49.421 Max Number of I/O Queues: 127 00:29:49.421 NVMe Specification Version (VS): 1.3 00:29:49.421 NVMe Specification Version 
(Identify): 1.3 00:29:49.421 Maximum Queue Entries: 128 00:29:49.421 Contiguous Queues Required: Yes 00:29:49.421 Arbitration Mechanisms Supported 00:29:49.421 Weighted Round Robin: Not Supported 00:29:49.421 Vendor Specific: Not Supported 00:29:49.421 Reset Timeout: 15000 ms 00:29:49.421 Doorbell Stride: 4 bytes 00:29:49.421 NVM Subsystem Reset: Not Supported 00:29:49.421 Command Sets Supported 00:29:49.421 NVM Command Set: Supported 00:29:49.421 Boot Partition: Not Supported 00:29:49.421 Memory Page Size Minimum: 4096 bytes 00:29:49.421 Memory Page Size Maximum: 4096 bytes 00:29:49.421 Persistent Memory Region: Not Supported 00:29:49.421 Optional Asynchronous Events Supported 00:29:49.421 Namespace Attribute Notices: Supported 00:29:49.421 Firmware Activation Notices: Not Supported 00:29:49.421 ANA Change Notices: Not Supported 00:29:49.421 PLE Aggregate Log Change Notices: Not Supported 00:29:49.421 LBA Status Info Alert Notices: Not Supported 00:29:49.421 EGE Aggregate Log Change Notices: Not Supported 00:29:49.421 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.421 Zone Descriptor Change Notices: Not Supported 00:29:49.421 Discovery Log Change Notices: Not Supported 00:29:49.421 Controller Attributes 00:29:49.421 128-bit Host Identifier: Supported 00:29:49.421 Non-Operational Permissive Mode: Not Supported 00:29:49.421 NVM Sets: Not Supported 00:29:49.421 Read Recovery Levels: Not Supported 00:29:49.421 Endurance Groups: Not Supported 00:29:49.421 Predictable Latency Mode: Not Supported 00:29:49.421 Traffic Based Keep ALive: Not Supported 00:29:49.421 Namespace Granularity: Not Supported 00:29:49.421 SQ Associations: Not Supported 00:29:49.421 UUID List: Not Supported 00:29:49.421 Multi-Domain Subsystem: Not Supported 00:29:49.421 Fixed Capacity Management: Not Supported 00:29:49.421 Variable Capacity Management: Not Supported 00:29:49.421 Delete Endurance Group: Not Supported 00:29:49.421 Delete NVM Set: Not Supported 00:29:49.421 Extended LBA 
Formats Supported: Not Supported 00:29:49.421 Flexible Data Placement Supported: Not Supported 00:29:49.421 00:29:49.421 Controller Memory Buffer Support 00:29:49.421 ================================ 00:29:49.421 Supported: No 00:29:49.421 00:29:49.421 Persistent Memory Region Support 00:29:49.421 ================================ 00:29:49.421 Supported: No 00:29:49.421 00:29:49.421 Admin Command Set Attributes 00:29:49.421 ============================ 00:29:49.421 Security Send/Receive: Not Supported 00:29:49.421 Format NVM: Not Supported 00:29:49.421 Firmware Activate/Download: Not Supported 00:29:49.421 Namespace Management: Not Supported 00:29:49.422 Device Self-Test: Not Supported 00:29:49.422 Directives: Not Supported 00:29:49.422 NVMe-MI: Not Supported 00:29:49.422 Virtualization Management: Not Supported 00:29:49.422 Doorbell Buffer Config: Not Supported 00:29:49.422 Get LBA Status Capability: Not Supported 00:29:49.422 Command & Feature Lockdown Capability: Not Supported 00:29:49.422 Abort Command Limit: 4 00:29:49.422 Async Event Request Limit: 4 00:29:49.422 Number of Firmware Slots: N/A 00:29:49.422 Firmware Slot 1 Read-Only: N/A 00:29:49.422 Firmware Activation Without Reset: N/A 00:29:49.422 Multiple Update Detection Support: N/A 00:29:49.422 Firmware Update Granularity: No Information Provided 00:29:49.422 Per-Namespace SMART Log: No 00:29:49.422 Asymmetric Namespace Access Log Page: Not Supported 00:29:49.422 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:49.422 Command Effects Log Page: Supported 00:29:49.422 Get Log Page Extended Data: Supported 00:29:49.422 Telemetry Log Pages: Not Supported 00:29:49.422 Persistent Event Log Pages: Not Supported 00:29:49.422 Supported Log Pages Log Page: May Support 00:29:49.422 Commands Supported & Effects Log Page: Not Supported 00:29:49.422 Feature Identifiers & Effects Log Page:May Support 00:29:49.422 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.422 Data Area 4 for Telemetry Log: Not Supported 
00:29:49.422 Error Log Page Entries Supported: 128 00:29:49.422 Keep Alive: Supported 00:29:49.422 Keep Alive Granularity: 10000 ms 00:29:49.422 00:29:49.422 NVM Command Set Attributes 00:29:49.422 ========================== 00:29:49.422 Submission Queue Entry Size 00:29:49.422 Max: 64 00:29:49.422 Min: 64 00:29:49.422 Completion Queue Entry Size 00:29:49.422 Max: 16 00:29:49.422 Min: 16 00:29:49.422 Number of Namespaces: 32 00:29:49.422 Compare Command: Supported 00:29:49.422 Write Uncorrectable Command: Not Supported 00:29:49.422 Dataset Management Command: Supported 00:29:49.422 Write Zeroes Command: Supported 00:29:49.422 Set Features Save Field: Not Supported 00:29:49.422 Reservations: Supported 00:29:49.422 Timestamp: Not Supported 00:29:49.422 Copy: Supported 00:29:49.422 Volatile Write Cache: Present 00:29:49.422 Atomic Write Unit (Normal): 1 00:29:49.422 Atomic Write Unit (PFail): 1 00:29:49.422 Atomic Compare & Write Unit: 1 00:29:49.422 Fused Compare & Write: Supported 00:29:49.422 Scatter-Gather List 00:29:49.422 SGL Command Set: Supported 00:29:49.422 SGL Keyed: Supported 00:29:49.422 SGL Bit Bucket Descriptor: Not Supported 00:29:49.422 SGL Metadata Pointer: Not Supported 00:29:49.422 Oversized SGL: Not Supported 00:29:49.422 SGL Metadata Address: Not Supported 00:29:49.422 SGL Offset: Supported 00:29:49.422 Transport SGL Data Block: Not Supported 00:29:49.422 Replay Protected Memory Block: Not Supported 00:29:49.422 00:29:49.422 Firmware Slot Information 00:29:49.422 ========================= 00:29:49.422 Active slot: 1 00:29:49.422 Slot 1 Firmware Revision: 24.05.1 00:29:49.422 00:29:49.422 00:29:49.422 Commands Supported and Effects 00:29:49.422 ============================== 00:29:49.422 Admin Commands 00:29:49.422 -------------- 00:29:49.422 Get Log Page (02h): Supported 00:29:49.422 Identify (06h): Supported 00:29:49.422 Abort (08h): Supported 00:29:49.422 Set Features (09h): Supported 00:29:49.422 Get Features (0Ah): Supported 00:29:49.422 
Asynchronous Event Request (0Ch): Supported 00:29:49.422 Keep Alive (18h): Supported 00:29:49.422 I/O Commands 00:29:49.422 ------------ 00:29:49.422 Flush (00h): Supported LBA-Change 00:29:49.422 Write (01h): Supported LBA-Change 00:29:49.422 Read (02h): Supported 00:29:49.422 Compare (05h): Supported 00:29:49.422 Write Zeroes (08h): Supported LBA-Change 00:29:49.422 Dataset Management (09h): Supported LBA-Change 00:29:49.422 Copy (19h): Supported LBA-Change 00:29:49.422 Unknown (79h): Supported LBA-Change 00:29:49.422 Unknown (7Ah): Supported 00:29:49.422 00:29:49.422 Error Log 00:29:49.422 ========= 00:29:49.422 00:29:49.422 Arbitration 00:29:49.422 =========== 00:29:49.422 Arbitration Burst: 1 00:29:49.422 00:29:49.422 Power Management 00:29:49.422 ================ 00:29:49.422 Number of Power States: 1 00:29:49.422 Current Power State: Power State #0 00:29:49.422 Power State #0: 00:29:49.422 Max Power: 0.00 W 00:29:49.422 Non-Operational State: Operational 00:29:49.422 Entry Latency: Not Reported 00:29:49.422 Exit Latency: Not Reported 00:29:49.422 Relative Read Throughput: 0 00:29:49.422 Relative Read Latency: 0 00:29:49.422 Relative Write Throughput: 0 00:29:49.422 Relative Write Latency: 0 00:29:49.422 Idle Power: Not Reported 00:29:49.422 Active Power: Not Reported 00:29:49.422 Non-Operational Permissive Mode: Not Supported 00:29:49.422 00:29:49.422 Health Information 00:29:49.422 ================== 00:29:49.422 Critical Warnings: 00:29:49.422 Available Spare Space: OK 00:29:49.422 Temperature: OK 00:29:49.422 Device Reliability: OK 00:29:49.422 Read Only: No 00:29:49.422 Volatile Memory Backup: OK 00:29:49.422 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:49.422 Temperature Threshold: [2024-07-12 00:42:17.179788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.179801] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xde8030) 00:29:49.422 [2024-07-12 
00:42:17.179814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.422 [2024-07-12 00:42:17.179837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41aa0, cid 7, qid 0 00:29:49.422 [2024-07-12 00:42:17.179936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.422 [2024-07-12 00:42:17.179951] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.422 [2024-07-12 00:42:17.179958] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.179966] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41aa0) on tqpair=0xde8030 00:29:49.422 [2024-07-12 00:42:17.180009] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:49.422 [2024-07-12 00:42:17.180032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.422 [2024-07-12 00:42:17.180045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.422 [2024-07-12 00:42:17.180056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.422 [2024-07-12 00:42:17.180067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.422 [2024-07-12 00:42:17.180081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180097] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.422 [2024-07-12 00:42:17.180109] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.422 [2024-07-12 00:42:17.180132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.422 [2024-07-12 00:42:17.180259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.422 [2024-07-12 00:42:17.180272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.422 [2024-07-12 00:42:17.180279] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180291] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.422 [2024-07-12 00:42:17.180304] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180320] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.422 [2024-07-12 00:42:17.180331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.422 [2024-07-12 00:42:17.180358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.422 [2024-07-12 00:42:17.180457] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.422 [2024-07-12 00:42:17.180470] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.422 [2024-07-12 00:42:17.180477] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180485] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.422 [2024-07-12 00:42:17.180493] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:49.422 [2024-07-12 
00:42:17.180502] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:49.422 [2024-07-12 00:42:17.180519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180528] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180536] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.422 [2024-07-12 00:42:17.180547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.422 [2024-07-12 00:42:17.180569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.422 [2024-07-12 00:42:17.180713] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.422 [2024-07-12 00:42:17.180728] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.422 [2024-07-12 00:42:17.180736] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180743] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.422 [2024-07-12 00:42:17.180761] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.422 [2024-07-12 00:42:17.180778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.423 [2024-07-12 00:42:17.180790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.423 [2024-07-12 00:42:17.180812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.423 [2024-07-12 00:42:17.180896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:29:49.423 [2024-07-12 00:42:17.180911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.423 [2024-07-12 00:42:17.180918] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.180926] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.423 [2024-07-12 00:42:17.180943] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.180953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.180960] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.423 [2024-07-12 00:42:17.180972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.423 [2024-07-12 00:42:17.180993] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.423 [2024-07-12 00:42:17.181122] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.423 [2024-07-12 00:42:17.181140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.423 [2024-07-12 00:42:17.181148] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.423 [2024-07-12 00:42:17.181173] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181190] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.423 [2024-07-12 00:42:17.181202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:49.423 [2024-07-12 00:42:17.181224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.423 [2024-07-12 00:42:17.181325] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.423 [2024-07-12 00:42:17.181338] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.423 [2024-07-12 00:42:17.181346] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181354] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.423 [2024-07-12 00:42:17.181371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181381] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181388] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.423 [2024-07-12 00:42:17.181399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.423 [2024-07-12 00:42:17.181421] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.423 [2024-07-12 00:42:17.181524] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.423 [2024-07-12 00:42:17.181537] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.423 [2024-07-12 00:42:17.181545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181552] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.423 [2024-07-12 00:42:17.181569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.181579] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.185598] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xde8030) 00:29:49.423 [2024-07-12 00:42:17.185626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.423 [2024-07-12 00:42:17.185652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe41520, cid 3, qid 0 00:29:49.423 [2024-07-12 00:42:17.185739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:49.423 [2024-07-12 00:42:17.185753] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:49.423 [2024-07-12 00:42:17.185761] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:49.423 [2024-07-12 00:42:17.185768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe41520) on tqpair=0xde8030 00:29:49.423 [2024-07-12 00:42:17.185783] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:29:49.423 0 Kelvin (-273 Celsius) 00:29:49.423 Available Spare: 0% 00:29:49.423 Available Spare Threshold: 0% 00:29:49.423 Life Percentage Used: 0% 00:29:49.423 Data Units Read: 0 00:29:49.423 Data Units Written: 0 00:29:49.423 Host Read Commands: 0 00:29:49.423 Host Write Commands: 0 00:29:49.423 Controller Busy Time: 0 minutes 00:29:49.423 Power Cycles: 0 00:29:49.423 Power On Hours: 0 hours 00:29:49.423 Unsafe Shutdowns: 0 00:29:49.423 Unrecoverable Media Errors: 0 00:29:49.423 Lifetime Error Log Entries: 0 00:29:49.423 Warning Temperature Time: 0 minutes 00:29:49.423 Critical Temperature Time: 0 minutes 00:29:49.423 00:29:49.423 Number of Queues 00:29:49.423 ================ 00:29:49.423 Number of I/O Submission Queues: 127 00:29:49.423 Number of I/O Completion Queues: 127 00:29:49.423 00:29:49.423 Active Namespaces 00:29:49.423 ================= 00:29:49.423 Namespace ID:1 00:29:49.423 Error Recovery Timeout: Unlimited 00:29:49.423 Command Set Identifier: NVM 
(00h) 00:29:49.423 Deallocate: Supported 00:29:49.423 Deallocated/Unwritten Error: Not Supported 00:29:49.423 Deallocated Read Value: Unknown 00:29:49.423 Deallocate in Write Zeroes: Not Supported 00:29:49.423 Deallocated Guard Field: 0xFFFF 00:29:49.423 Flush: Supported 00:29:49.423 Reservation: Supported 00:29:49.423 Namespace Sharing Capabilities: Multiple Controllers 00:29:49.423 Size (in LBAs): 131072 (0GiB) 00:29:49.423 Capacity (in LBAs): 131072 (0GiB) 00:29:49.423 Utilization (in LBAs): 131072 (0GiB) 00:29:49.423 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:49.423 EUI64: ABCDEF0123456789 00:29:49.423 UUID: acb27e64-2d15-4aeb-a5f7-e70b88ad1119 00:29:49.423 Thin Provisioning: Not Supported 00:29:49.423 Per-NS Atomic Units: Yes 00:29:49.423 Atomic Boundary Size (Normal): 0 00:29:49.423 Atomic Boundary Size (PFail): 0 00:29:49.423 Atomic Boundary Offset: 0 00:29:49.423 Maximum Single Source Range Length: 65535 00:29:49.423 Maximum Copy Length: 65535 00:29:49.423 Maximum Source Range Count: 1 00:29:49.423 NGUID/EUI64 Never Reused: No 00:29:49.423 Namespace Write Protected: No 00:29:49.423 Number of LBA Formats: 1 00:29:49.423 Current LBA Format: LBA Format #00 00:29:49.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:49.423 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:49.423 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:49.423 rmmod nvme_tcp 00:29:49.423 rmmod nvme_fabrics 00:29:49.682 rmmod nvme_keyring 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1034028 ']' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1034028 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 1034028 ']' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 1034028 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1034028 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1034028' 00:29:49.682 killing process with pid 1034028 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 1034028 
00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 1034028 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.682 00:42:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.215 00:42:19 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:52.215 00:29:52.215 real 0m4.790s 00:29:52.215 user 0m3.857s 00:29:52.215 sys 0m1.497s 00:29:52.215 00:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:52.215 00:42:19 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.215 ************************************ 00:29:52.215 END TEST nvmf_identify 00:29:52.215 ************************************ 00:29:52.215 00:42:19 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:52.215 00:42:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:52.215 00:42:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:52.215 00:42:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.215 ************************************ 00:29:52.215 START TEST nvmf_perf 00:29:52.215 ************************************ 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:52.215 * Looking for test storage... 00:29:52.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.215 00:42:19 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.216 00:42:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:53.626 00:42:21 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:29:53.626 Found 0000:08:00.0 (0x8086 - 0x159b) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:29:53.626 Found 0000:08:00.1 (0x8086 - 0x159b) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:29:53.626 Found net devices under 0000:08:00.0: cvl_0_0 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: 
cvl_0_1' 00:29:53.626 Found net devices under 0000:08:00.1: cvl_0_1 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.626 
00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:53.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:29:53.626 00:29:53.626 --- 10.0.0.2 ping statistics --- 00:29:53.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.626 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:29:53.626 00:29:53.626 --- 10.0.0.1 ping statistics --- 00:29:53.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.626 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1035553 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1035553 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 1035553 ']' 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:53.626 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:53.884 [2024-07-12 00:42:21.504638] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:53.884 [2024-07-12 00:42:21.504723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.884 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.884 [2024-07-12 00:42:21.572217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:53.884 [2024-07-12 00:42:21.659767] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.884 [2024-07-12 00:42:21.659826] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.884 [2024-07-12 00:42:21.659842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.884 [2024-07-12 00:42:21.659855] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.884 [2024-07-12 00:42:21.659866] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:53.884 [2024-07-12 00:42:21.659926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.884 [2024-07-12 00:42:21.659986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:53.884 [2024-07-12 00:42:21.660042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.884 [2024-07-12 00:42:21.660045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:54.142 00:42:21 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:57.429 00:42:24 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:57.429 00:42:24 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:57.429 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:29:57.429 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:57.688 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:57.688 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:84:00.0 ']' 00:29:57.688 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:57.688 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:57.688 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:58.253 [2024-07-12 00:42:25.802750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.253 00:42:25 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.510 00:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:58.510 00:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.767 00:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:58.767 00:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:59.024 00:42:26 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.282 [2024-07-12 00:42:26.995045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.282 00:42:27 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.541 00:42:27 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:29:59.541 00:42:27 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 
00:29:59.541 00:42:27 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:59.541 00:42:27 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:30:00.919 Initializing NVMe Controllers 00:30:00.919 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:30:00.919 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:30:00.919 Initialization complete. Launching workers. 00:30:00.919 ======================================================== 00:30:00.919 Latency(us) 00:30:00.919 Device Information : IOPS MiB/s Average min max 00:30:00.919 PCIE (0000:84:00.0) NSID 1 from core 0: 65595.41 256.23 486.82 27.24 4406.12 00:30:00.919 ======================================================== 00:30:00.919 Total : 65595.41 256.23 486.82 27.24 4406.12 00:30:00.919 00:30:00.919 00:42:28 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.919 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.324 Initializing NVMe Controllers 00:30:02.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:02.324 Initialization complete. Launching workers. 
00:30:02.324 ======================================================== 00:30:02.324 Latency(us) 00:30:02.324 Device Information : IOPS MiB/s Average min max 00:30:02.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11729.42 170.04 44828.10 00:30:02.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19777.20 7952.17 51803.69 00:30:02.324 ======================================================== 00:30:02.324 Total : 139.00 0.54 14740.10 170.04 51803.69 00:30:02.324 00:30:02.324 00:42:29 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:02.324 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.260 Initializing NVMe Controllers 00:30:03.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:03.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:03.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:03.260 Initialization complete. Launching workers. 
00:30:03.260 ======================================================== 00:30:03.260 Latency(us) 00:30:03.260 Device Information : IOPS MiB/s Average min max 00:30:03.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7802.81 30.48 4101.18 721.00 9193.85 00:30:03.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3712.58 14.50 8653.18 5082.97 17993.73 00:30:03.260 ======================================================== 00:30:03.260 Total : 11515.39 44.98 5568.75 721.00 17993.73 00:30:03.260 00:30:03.260 00:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:03.260 00:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:03.260 00:42:30 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:03.260 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.793 Initializing NVMe Controllers 00:30:05.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.793 Controller IO queue size 128, less than required. 00:30:05.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.793 Controller IO queue size 128, less than required. 00:30:05.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:05.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:05.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:05.793 Initialization complete. Launching workers. 
00:30:05.793 ======================================================== 00:30:05.793 Latency(us) 00:30:05.794 Device Information : IOPS MiB/s Average min max 00:30:05.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1468.93 367.23 89349.68 67688.62 136469.68 00:30:05.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 550.29 137.57 241904.63 82529.10 404774.02 00:30:05.794 ======================================================== 00:30:05.794 Total : 2019.22 504.80 130924.68 67688.62 404774.02 00:30:05.794 00:30:05.794 00:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:05.794 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.051 No valid NVMe controllers or AIO or URING devices found 00:30:06.051 Initializing NVMe Controllers 00:30:06.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.051 Controller IO queue size 128, less than required. 00:30:06.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:06.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:06.051 Controller IO queue size 128, less than required. 00:30:06.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:06.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:06.051 WARNING: Some requested NVMe devices were skipped 00:30:06.051 00:42:33 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:06.051 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.585 Initializing NVMe Controllers 00:30:08.585 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.585 Controller IO queue size 128, less than required. 00:30:08.585 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.586 Controller IO queue size 128, less than required. 00:30:08.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:08.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:08.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:08.586 Initialization complete. Launching workers. 
00:30:08.586 00:30:08.586 ==================== 00:30:08.586 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:08.586 TCP transport: 00:30:08.586 polls: 8720 00:30:08.586 idle_polls: 5169 00:30:08.586 sock_completions: 3551 00:30:08.586 nvme_completions: 6287 00:30:08.586 submitted_requests: 9414 00:30:08.586 queued_requests: 1 00:30:08.586 00:30:08.586 ==================== 00:30:08.586 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:08.586 TCP transport: 00:30:08.586 polls: 11298 00:30:08.586 idle_polls: 8048 00:30:08.586 sock_completions: 3250 00:30:08.586 nvme_completions: 5795 00:30:08.586 submitted_requests: 8770 00:30:08.586 queued_requests: 1 00:30:08.586 ======================================================== 00:30:08.586 Latency(us) 00:30:08.586 Device Information : IOPS MiB/s Average min max 00:30:08.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1571.13 392.78 82948.59 54513.34 127443.88 00:30:08.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1448.15 362.04 89653.31 40346.24 142557.29 00:30:08.586 ======================================================== 00:30:08.586 Total : 3019.28 754.82 86164.41 40346.24 142557.29 00:30:08.586 00:30:08.586 00:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:08.586 00:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:08.843 00:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:08.843 00:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:84:00.0 ']' 00:30:08.843 00:42:36 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # 
ls_guid=3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:30:12.132 00:42:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:12.390 { 00:30:12.390 "uuid": "3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7", 00:30:12.390 "name": "lvs_0", 00:30:12.390 "base_bdev": "Nvme0n1", 00:30:12.390 "total_data_clusters": 238234, 00:30:12.390 "free_clusters": 238234, 00:30:12.390 "block_size": 512, 00:30:12.390 "cluster_size": 4194304 00:30:12.390 } 00:30:12.390 ]' 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7") .free_clusters' 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7") .cluster_size' 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:30:12.390 952936 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- 
host/perf.sh@78 -- # free_mb=20480 00:30:12.390 00:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7 lbd_0 20480 00:30:13.326 00:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=19b071c8-a373-438a-86e7-bf09090f4e4d 00:30:13.326 00:42:40 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 19b071c8-a373-438a-86e7-bf09090f4e4d lvs_n_0 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:30:13.893 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:14.151 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:14.151 { 00:30:14.151 "uuid": "3b6a5aaf-aee7-4a9a-b712-7ad3ab34aee7", 00:30:14.151 "name": "lvs_0", 00:30:14.151 "base_bdev": "Nvme0n1", 00:30:14.151 "total_data_clusters": 238234, 00:30:14.151 "free_clusters": 233114, 00:30:14.151 "block_size": 512, 00:30:14.151 "cluster_size": 4194304 00:30:14.151 }, 00:30:14.151 { 00:30:14.151 "uuid": "e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e", 00:30:14.151 "name": "lvs_n_0", 00:30:14.151 "base_bdev": "19b071c8-a373-438a-86e7-bf09090f4e4d", 00:30:14.151 "total_data_clusters": 5114, 00:30:14.151 "free_clusters": 
5114, 00:30:14.151 "block_size": 512, 00:30:14.151 "cluster_size": 4194304 00:30:14.151 } 00:30:14.151 ]' 00:30:14.151 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e") .free_clusters' 00:30:14.151 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:30:14.151 00:42:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e") .cluster_size' 00:30:14.410 00:42:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:14.410 00:42:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:30:14.410 00:42:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:30:14.410 20456 00:30:14.410 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:14.410 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e13a4cde-f2b7-49d3-a4e4-43d7e23d4b1e lbd_nest_0 20456 00:30:14.668 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=b46e292e-a440-484b-87b3-0a5728ad1240 00:30:14.668 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.926 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:14.926 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b46e292e-a440-484b-87b3-0a5728ad1240 00:30:15.185 00:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.444 00:42:43 nvmf_tcp.nvmf_perf -- 
host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:15.444 00:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:15.444 00:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:15.444 00:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:15.444 00:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:15.444 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.665 Initializing NVMe Controllers 00:30:27.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:27.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:27.665 Initialization complete. Launching workers. 00:30:27.665 ======================================================== 00:30:27.665 Latency(us) 00:30:27.665 Device Information : IOPS MiB/s Average min max 00:30:27.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.98 0.02 21336.89 190.24 46880.15 00:30:27.665 ======================================================== 00:30:27.665 Total : 46.98 0.02 21336.89 190.24 46880.15 00:30:27.665 00:30:27.665 00:42:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:27.665 00:42:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:27.665 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.663 Initializing NVMe Controllers 00:30:37.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.663 Initialization complete. 
Launching workers. 00:30:37.664 ======================================================== 00:30:37.664 Latency(us) 00:30:37.664 Device Information : IOPS MiB/s Average min max 00:30:37.664 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.80 9.22 13567.90 4994.78 55811.77 00:30:37.664 ======================================================== 00:30:37.664 Total : 73.80 9.22 13567.90 4994.78 55811.77 00:30:37.664 00:30:37.664 00:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:37.664 00:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:37.664 00:43:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.664 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.656 Initializing NVMe Controllers 00:30:47.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:47.656 Initialization complete. Launching workers. 
00:30:47.656 ======================================================== 00:30:47.656 Latency(us) 00:30:47.656 Device Information : IOPS MiB/s Average min max 00:30:47.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6608.63 3.23 4841.21 334.19 13075.69 00:30:47.656 ======================================================== 00:30:47.656 Total : 6608.63 3.23 4841.21 334.19 13075.69 00:30:47.656 00:30:47.656 00:43:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:47.656 00:43:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:47.656 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.637 Initializing NVMe Controllers 00:30:57.637 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.637 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.637 Initialization complete. Launching workers. 
00:30:57.637 ======================================================== 00:30:57.637 Latency(us) 00:30:57.637 Device Information : IOPS MiB/s Average min max 00:30:57.637 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3245.53 405.69 9859.69 1600.90 20450.19 00:30:57.637 ======================================================== 00:30:57.637 Total : 3245.53 405.69 9859.69 1600.90 20450.19 00:30:57.637 00:30:57.637 00:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:57.637 00:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:57.637 00:43:24 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.637 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.623 Initializing NVMe Controllers 00:31:07.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.623 Controller IO queue size 128, less than required. 00:31:07.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.623 Initialization complete. Launching workers. 
00:31:07.623 ======================================================== 00:31:07.623 Latency(us) 00:31:07.623 Device Information : IOPS MiB/s Average min max 00:31:07.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10201.60 4.98 12546.70 1753.78 33722.72 00:31:07.623 ======================================================== 00:31:07.623 Total : 10201.60 4.98 12546.70 1753.78 33722.72 00:31:07.623 00:31:07.623 00:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:07.623 00:43:34 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:07.623 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.665 Initializing NVMe Controllers 00:31:17.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:17.665 Controller IO queue size 128, less than required. 00:31:17.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:17.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:17.665 Initialization complete. Launching workers. 
00:31:17.665 ======================================================== 00:31:17.665 Latency(us) 00:31:17.665 Device Information : IOPS MiB/s Average min max 00:31:17.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1193.90 149.24 107453.77 15766.85 235190.29 00:31:17.665 ======================================================== 00:31:17.665 Total : 1193.90 149.24 107453.77 15766.85 235190.29 00:31:17.665 00:31:17.665 00:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:17.665 00:43:45 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b46e292e-a440-484b-87b3-0a5728ad1240 00:31:18.602 00:43:46 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:18.862 00:43:46 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19b071c8-a373-438a-86e7-bf09090f4e4d 00:31:19.121 00:43:46 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.379 rmmod 
nvme_tcp 00:31:19.379 rmmod nvme_fabrics 00:31:19.379 rmmod nvme_keyring 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1035553 ']' 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1035553 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 1035553 ']' 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 1035553 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1035553 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1035553' 00:31:19.379 killing process with pid 1035553 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 1035553 00:31:19.379 00:43:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 1035553 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.287 00:43:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.195 00:43:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:23.196 00:31:23.196 real 1m31.127s 00:31:23.196 user 5m35.401s 00:31:23.196 sys 0m15.752s 00:31:23.196 00:43:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:23.196 00:43:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:23.196 ************************************ 00:31:23.196 END TEST nvmf_perf 00:31:23.196 ************************************ 00:31:23.196 00:43:50 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:23.196 00:43:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:23.196 00:43:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:23.196 00:43:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:23.196 ************************************ 00:31:23.196 START TEST nvmf_fio_host 00:31:23.196 ************************************ 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:23.196 * Looking for test storage... 
00:31:23.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:23.196 
00:43:50 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:23.196 00:43:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:31:24.576 Found 0000:08:00.0 (0x8086 - 0x159b) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:31:24.576 Found 0000:08:00.1 (0x8086 - 0x159b) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:31:24.576 Found net devices under 0000:08:00.0: cvl_0_0 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.576 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:31:24.577 Found net devices under 0000:08:00.1: cvl_0_1 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.577 
00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.577 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:31:24.836 00:31:24.836 --- 10.0.0.2 ping statistics --- 00:31:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.836 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:24.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:31:24.836 00:31:24.836 --- 10.0.0.1 ping statistics --- 00:31:24.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.836 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.836 00:43:52 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1044930 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1044930 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 1044930 ']' 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.836 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:24.836 [2024-07-12 00:43:52.567778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:31:24.836 [2024-07-12 00:43:52.567866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.836 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.836 [2024-07-12 00:43:52.632642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:25.095 [2024-07-12 00:43:52.720201] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:25.095 [2024-07-12 00:43:52.720253] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:25.095 [2024-07-12 00:43:52.720269] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:25.095 [2024-07-12 00:43:52.720283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:25.095 [2024-07-12 00:43:52.720295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:25.095 [2024-07-12 00:43:52.720350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:25.095 [2024-07-12 00:43:52.720382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:25.095 [2024-07-12 00:43:52.720431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:25.095 [2024-07-12 00:43:52.720434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.095 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:25.095 00:43:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:31:25.095 00:43:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.356 [2024-07-12 00:43:53.119044] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.356 00:43:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:25.356 00:43:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:25.356 00:43:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.356 00:43:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:25.921 Malloc1 00:31:25.921 00:43:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:26.179 00:43:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:26.437 00:43:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.695 
[2024-07-12 00:43:54.340259] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.695 00:43:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:26.953 00:43:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:26.953 00:43:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:26.954 00:43:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:27.212 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:27.212 fio-3.35 00:31:27.212 Starting 1 thread 00:31:27.212 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.741 00:31:29.741 test: (groupid=0, jobs=1): err= 0: pid=1045213: Fri Jul 12 00:43:57 2024 00:31:29.741 read: IOPS=6993, BW=27.3MiB/s (28.6MB/s)(54.9MiB/2008msec) 00:31:29.741 slat (usec): 
min=2, max=214, avg= 2.88, stdev= 2.70 00:31:29.741 clat (usec): min=3077, max=18015, avg=9934.47, stdev=1957.31 00:31:29.741 lat (usec): min=3118, max=18018, avg=9937.35, stdev=1957.35 00:31:29.741 clat percentiles (usec): 00:31:29.741 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455], 00:31:29.741 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:31:29.741 | 70.00th=[10159], 80.00th=[11600], 90.00th=[13042], 95.00th=[13960], 00:31:29.741 | 99.00th=[15664], 99.50th=[16057], 99.90th=[17433], 99.95th=[17957], 00:31:29.741 | 99.99th=[17957] 00:31:29.741 bw ( KiB/s): min=24096, max=31824, per=99.87%, avg=27938.00, stdev=4256.87, samples=4 00:31:29.741 iops : min= 6024, max= 7956, avg=6984.50, stdev=1064.22, samples=4 00:31:29.741 write: IOPS=7000, BW=27.3MiB/s (28.7MB/s)(54.9MiB/2008msec); 0 zone resets 00:31:29.741 slat (usec): min=2, max=206, avg= 3.00, stdev= 1.98 00:31:29.741 clat (usec): min=2258, max=14946, avg=8276.61, stdev=1634.37 00:31:29.741 lat (usec): min=2272, max=14949, avg=8279.61, stdev=1634.44 00:31:29.741 clat percentiles (usec): 00:31:29.741 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7111], 00:31:29.741 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7963], 00:31:29.741 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[11600], 00:31:29.741 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14222], 99.95th=[14484], 00:31:29.741 | 99.99th=[14484] 00:31:29.741 bw ( KiB/s): min=24000, max=31680, per=100.00%, avg=28006.00, stdev=3981.35, samples=4 00:31:29.741 iops : min= 6000, max= 7920, avg=7001.50, stdev=995.34, samples=4 00:31:29.741 lat (msec) : 4=0.14%, 10=74.73%, 20=25.13% 00:31:29.741 cpu : usr=69.56%, sys=29.05%, ctx=84, majf=0, minf=31 00:31:29.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:29.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:31:29.741 issued rwts: total=14043,14056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:29.741 00:31:29.741 Run status group 0 (all jobs): 00:31:29.741 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=54.9MiB (57.5MB), run=2008-2008msec 00:31:29.741 WRITE: bw=27.3MiB/s (28.7MB/s), 27.3MiB/s-27.3MiB/s (28.7MB/s-28.7MB/s), io=54.9MiB (57.6MB), run=2008-2008msec 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:29.741 00:43:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:29.741 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:29.741 fio-3.35 00:31:29.741 Starting 1 thread 00:31:29.741 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.268 00:31:32.268 test: (groupid=0, jobs=1): err= 0: pid=1045463: Fri Jul 12 00:43:59 2024 00:31:32.268 read: IOPS=7621, BW=119MiB/s (125MB/s)(239MiB/2009msec) 00:31:32.268 slat (usec): 
min=3, max=122, avg= 4.03, stdev= 1.56 00:31:32.268 clat (usec): min=3177, max=18243, avg=9570.67, stdev=2268.83 00:31:32.268 lat (usec): min=3180, max=18247, avg=9574.70, stdev=2268.87 00:31:32.268 clat percentiles (usec): 00:31:32.268 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7767], 00:31:32.268 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:31:32.268 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12518], 95.00th=[13960], 00:31:32.268 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:31:32.268 | 99.99th=[17957] 00:31:32.268 bw ( KiB/s): min=57472, max=67200, per=51.12%, avg=62336.00, stdev=4933.12, samples=4 00:31:32.268 iops : min= 3592, max= 4200, avg=3896.00, stdev=308.32, samples=4 00:31:32.268 write: IOPS=4431, BW=69.2MiB/s (72.6MB/s)(127MiB/1841msec); 0 zone resets 00:31:32.268 slat (usec): min=32, max=247, avg=37.91, stdev= 7.32 00:31:32.268 clat (usec): min=4574, max=22450, avg=12680.17, stdev=2002.72 00:31:32.268 lat (usec): min=4623, max=22483, avg=12718.08, stdev=2003.11 00:31:32.268 clat percentiles (usec): 00:31:32.268 | 1.00th=[ 8291], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:31:32.268 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[13173], 00:31:32.268 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15270], 95.00th=[16057], 00:31:32.268 | 99.00th=[17433], 99.50th=[18482], 99.90th=[21365], 99.95th=[21627], 00:31:32.268 | 99.99th=[22414] 00:31:32.268 bw ( KiB/s): min=58560, max=70304, per=91.55%, avg=64920.00, stdev=5948.12, samples=4 00:31:32.268 iops : min= 3660, max= 4394, avg=4057.50, stdev=371.76, samples=4 00:31:32.268 lat (msec) : 4=0.12%, 10=42.96%, 20=56.80%, 50=0.13% 00:31:32.268 cpu : usr=80.33%, sys=18.53%, ctx=39, majf=0, minf=51 00:31:32.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:32.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:32.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:32.268 issued rwts: total=15312,8159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:32.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:32.268 00:31:32.268 Run status group 0 (all jobs): 00:31:32.268 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=239MiB (251MB), run=2009-2009msec 00:31:32.268 WRITE: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=127MiB (134MB), run=1841-1841msec 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:31:32.268 00:43:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 -i 10.0.0.2 00:31:35.548 Nvme0n1 00:31:35.548 00:44:03 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=b577ed92-8081-4712-b68c-8ad1fffe360a 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb b577ed92-8081-4712-b68c-8ad1fffe360a 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=b577ed92-8081-4712-b68c-8ad1fffe360a 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:31:38.863 00:44:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:38.863 { 00:31:38.863 "uuid": "b577ed92-8081-4712-b68c-8ad1fffe360a", 00:31:38.863 "name": "lvs_0", 00:31:38.863 "base_bdev": "Nvme0n1", 00:31:38.863 "total_data_clusters": 930, 00:31:38.863 "free_clusters": 930, 00:31:38.863 "block_size": 512, 00:31:38.863 "cluster_size": 1073741824 00:31:38.863 } 00:31:38.863 ]' 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="b577ed92-8081-4712-b68c-8ad1fffe360a") .free_clusters' 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="b577ed92-8081-4712-b68c-8ad1fffe360a") .cluster_size' 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1369 -- # free_mb=952320 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:31:38.863 952320 00:31:38.863 00:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:39.121 c8cad1b0-0297-406c-8306-b438977905f6 00:31:39.121 00:44:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:39.379 00:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:39.636 00:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:39.893 00:44:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.151 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:40.151 fio-3.35 00:31:40.151 Starting 1 thread 00:31:40.151 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.677 00:31:42.677 test: (groupid=0, jobs=1): err= 0: pid=1046524: Fri Jul 12 00:44:10 2024 00:31:42.677 read: IOPS=5258, BW=20.5MiB/s (21.5MB/s)(41.3MiB/2009msec) 00:31:42.677 slat (usec): min=2, max=194, avg= 2.89, stdev= 2.79 00:31:42.677 clat (usec): min=1025, max=171651, avg=13274.87, stdev=12300.59 00:31:42.677 lat (usec): min=1028, max=171721, avg=13277.75, stdev=12301.04 00:31:42.677 clat percentiles (msec): 00:31:42.677 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:31:42.677 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:31:42.677 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 14], 95.00th=[ 15], 00:31:42.677 | 99.00th=[ 18], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:42.677 | 99.99th=[ 171] 00:31:42.677 bw ( KiB/s): min=15040, max=23096, per=99.73%, avg=20978.00, stdev=3959.90, samples=4 00:31:42.677 iops : min= 3760, max= 5774, avg=5244.50, stdev=989.98, samples=4 00:31:42.677 write: IOPS=5252, BW=20.5MiB/s (21.5MB/s)(41.2MiB/2009msec); 0 zone resets 00:31:42.677 slat (usec): min=2, max=135, avg= 3.01, stdev= 1.54 00:31:42.677 clat (usec): min=280, max=169398, avg=10928.77, stdev=11537.27 00:31:42.677 lat (usec): min=284, max=169406, avg=10931.78, stdev=11537.71 00:31:42.677 clat percentiles (msec): 00:31:42.677 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:31:42.677 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 11], 00:31:42.677 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:31:42.677 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:31:42.677 | 99.99th=[ 169] 00:31:42.677 bw ( KiB/s): min=15848, max=22840, per=99.96%, avg=21002.00, stdev=3438.74, samples=4 00:31:42.677 
iops : min= 3962, max= 5710, avg=5250.50, stdev=859.68, samples=4 00:31:42.677 lat (usec) : 500=0.01%, 750=0.01% 00:31:42.677 lat (msec) : 2=0.04%, 4=0.09%, 10=23.61%, 20=75.60%, 50=0.03% 00:31:42.677 lat (msec) : 250=0.61% 00:31:42.677 cpu : usr=69.32%, sys=29.48%, ctx=88, majf=0, minf=31 00:31:42.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:42.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.677 issued rwts: total=10565,10553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.677 00:31:42.677 Run status group 0 (all jobs): 00:31:42.677 READ: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=41.3MiB (43.3MB), run=2009-2009msec 00:31:42.677 WRITE: bw=20.5MiB/s (21.5MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=41.2MiB (43.2MB), run=2009-2009msec 00:31:42.677 00:44:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.677 00:44:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=027b780d-7998-4586-a916-1751188ce6e3 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 027b780d-7998-4586-a916-1751188ce6e3 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=027b780d-7998-4586-a916-1751188ce6e3 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@1363 -- # local cs 00:31:43.610 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:31:44.175 { 00:31:44.175 "uuid": "b577ed92-8081-4712-b68c-8ad1fffe360a", 00:31:44.175 "name": "lvs_0", 00:31:44.175 "base_bdev": "Nvme0n1", 00:31:44.175 "total_data_clusters": 930, 00:31:44.175 "free_clusters": 0, 00:31:44.175 "block_size": 512, 00:31:44.175 "cluster_size": 1073741824 00:31:44.175 }, 00:31:44.175 { 00:31:44.175 "uuid": "027b780d-7998-4586-a916-1751188ce6e3", 00:31:44.175 "name": "lvs_n_0", 00:31:44.175 "base_bdev": "c8cad1b0-0297-406c-8306-b438977905f6", 00:31:44.175 "total_data_clusters": 237847, 00:31:44.175 "free_clusters": 237847, 00:31:44.175 "block_size": 512, 00:31:44.175 "cluster_size": 4194304 00:31:44.175 } 00:31:44.175 ]' 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="027b780d-7998-4586-a916-1751188ce6e3") .free_clusters' 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="027b780d-7998-4586-a916-1751188ce6e3") .cluster_size' 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:31:44.175 951388 00:31:44.175 00:44:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:45.107 f00106d0-409d-4346-81ab-1114eb9de292 00:31:45.107 00:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:45.107 00:44:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:45.365 00:44:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:45.623 00:44:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.880 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.880 fio-3.35 00:31:45.880 Starting 1 thread 00:31:45.880 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.405 00:31:48.405 test: (groupid=0, jobs=1): err= 0: pid=1047089: Fri Jul 12 00:44:15 2024 00:31:48.405 read: IOPS=5094, BW=19.9MiB/s 
(20.9MB/s)(40.0MiB/2010msec) 00:31:48.405 slat (usec): min=2, max=144, avg= 2.56, stdev= 1.82 00:31:48.405 clat (usec): min=4829, max=23004, avg=13631.26, stdev=1293.70 00:31:48.405 lat (usec): min=4833, max=23006, avg=13633.81, stdev=1293.58 00:31:48.405 clat percentiles (usec): 00:31:48.405 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12125], 20.00th=[12518], 00:31:48.405 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13960], 00:31:48.405 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:31:48.405 | 99.00th=[16581], 99.50th=[17171], 99.90th=[19530], 99.95th=[21103], 00:31:48.405 | 99.99th=[22938] 00:31:48.405 bw ( KiB/s): min=19296, max=20760, per=99.69%, avg=20312.00, stdev=685.86, samples=4 00:31:48.405 iops : min= 4824, max= 5190, avg=5078.00, stdev=171.46, samples=4 00:31:48.405 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(39.8MiB/2010msec); 0 zone resets 00:31:48.405 slat (nsec): min=2272, max=91511, avg=2668.83, stdev=1189.93 00:31:48.405 clat (usec): min=2258, max=19386, avg=11368.23, stdev=1049.99 00:31:48.405 lat (usec): min=2263, max=19388, avg=11370.90, stdev=1049.92 00:31:48.405 clat percentiles (usec): 00:31:48.405 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:31:48.405 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:31:48.405 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:31:48.405 | 99.00th=[13566], 99.50th=[14091], 99.90th=[18744], 99.95th=[19006], 00:31:48.405 | 99.99th=[19268] 00:31:48.405 bw ( KiB/s): min=20184, max=20416, per=100.00%, avg=20310.00, stdev=98.93, samples=4 00:31:48.405 iops : min= 5046, max= 5104, avg=5077.50, stdev=24.73, samples=4 00:31:48.405 lat (msec) : 4=0.05%, 10=3.98%, 20=95.92%, 50=0.04% 00:31:48.405 cpu : usr=67.99%, sys=30.96%, ctx=77, majf=0, minf=31 00:31:48.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:31:48.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:48.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.405 issued rwts: total=10239,10200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.405 00:31:48.405 Run status group 0 (all jobs): 00:31:48.405 READ: bw=19.9MiB/s (20.9MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=40.0MiB (41.9MB), run=2010-2010msec 00:31:48.405 WRITE: bw=19.8MiB/s (20.8MB/s), 19.8MiB/s-19.8MiB/s (20.8MB/s-20.8MB/s), io=39.8MiB (41.8MB), run=2010-2010msec 00:31:48.405 00:44:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:48.405 00:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:48.405 00:44:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:52.583 00:44:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:52.583 00:44:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:55.859 00:44:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:55.859 00:44:23 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:58.438 rmmod nvme_tcp 00:31:58.438 rmmod nvme_fabrics 00:31:58.438 rmmod nvme_keyring 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1044930 ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1044930 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 1044930 ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 1044930 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1044930 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1044930' 00:31:58.438 killing process with pid 1044930 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 1044930 00:31:58.438 00:44:25 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 1044930 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:58.438 00:44:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.374 00:44:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:00.374 00:32:00.374 real 0m37.184s 00:32:00.374 user 2m24.239s 00:32:00.374 sys 0m6.116s 00:32:00.374 00:44:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:00.374 00:44:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.374 ************************************ 00:32:00.374 END TEST nvmf_fio_host 00:32:00.374 ************************************ 00:32:00.374 00:44:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:00.374 00:44:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:00.374 00:44:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:00.374 00:44:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.374 ************************************ 00:32:00.374 START TEST nvmf_failover 00:32:00.374 ************************************ 00:32:00.374 00:44:27 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:00.374 * Looking for test storage... 00:32:00.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.374 00:44:28 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.375 00:44:28 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:00.375 00:44:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:01.753 00:44:29 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:01.753 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:01.753 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:32:01.753 Found net devices under 0000:08:00.0: cvl_0_0 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]]
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:32:01.753 Found net devices under 0000:08:00.1: cvl_0_1
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:01.753 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:02.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:02.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:32:02.012
00:32:02.012 --- 10.0.0.2 ping statistics ---
00:32:02.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:02.012 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:02.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:02.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms
00:32:02.012
00:32:02.012 --- 10.0.0.1 ping statistics ---
00:32:02.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:02.012 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1049665
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1049665
00:32:02.012 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1049665 ']'
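The nvmf_tcp_init trace above builds a two-port topology: one port of the NIC pair (cvl_0_0) is moved into a private network namespace where the target runs, while the other (cvl_0_1) stays in the root namespace as the initiator, so the two sides talk over a real TCP path. A minimal sketch of that plumbing, with interface names and addresses taken from this log; the `run()` dry-run wrapper is ours, since the real commands need root:

```shell
# Sketch of the namespace topology set up by nvmf_tcp_init. Hedged: the
# run() dry-run wrapper is our addition; interface names, addresses and
# the iptables rule are taken from this log.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk                                  # target-side namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target port enters the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side (root ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target sanity check
```

With `DRY_RUN=0` (as root) this reproduces the state the log's two ping checks verify before `nvmf_tgt` is started inside the namespace.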
00:32:02.013 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:02.013 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:32:02.013 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:02.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:02.013 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:02.013 00:44:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:02.013 [2024-07-12 00:44:29.774801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:32:02.013 [2024-07-12 00:44:29.774890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:02.013 EAL: No free 2048 kB hugepages reported on node 1
00:32:02.013 [2024-07-12 00:44:29.840968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:02.271 [2024-07-12 00:44:29.927977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:02.271 [2024-07-12 00:44:29.928035] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:02.271 [2024-07-12 00:44:29.928050] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:02.271 [2024-07-12 00:44:29.928065] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:02.271 [2024-07-12 00:44:29.928077] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:02.271 [2024-07-12 00:44:29.928158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:02.271 [2024-07-12 00:44:29.928209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:02.271 [2024-07-12 00:44:29.928212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:02.271 00:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:02.530 [2024-07-12 00:44:30.323931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:02.530 00:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:32:03.097 Malloc0
00:32:03.097 00:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:03.097 00:44:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:03.355 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:03.612 [2024-07-12 00:44:31.364961] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:03.612 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:03.870 [2024-07-12 00:44:31.609712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:03.870 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:04.128 [2024-07-12 00:44:31.866598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1049883
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1049883 /var/tmp/bdevperf.sock
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1049883 ']'
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:04.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:04.128 00:44:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:32:04.387 00:44:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:32:04.387 00:44:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:32:04.387 00:44:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:04.952 NVMe0n1
00:32:04.952 00:44:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:05.210
00:32:05.210 00:44:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1049983
00:32:05.210 00:44:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:32:05.210 00:44:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:32:06.144 00:44:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:06.402 [2024-07-12 00:44:34.182973] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2331c60 is same with the state(5) to be set
[last message repeated 44 times; identical tcp.c:1598 entries for tqpair=0x2331c60 with timestamps 00:44:34.183048-00:44:34.183659 elided]
00:32:06.403 00:44:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:09.694 00:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:09.952
00:32:09.952 00:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:10.210 00:44:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:13.489 00:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
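The failover.sh steps traced above (@35 through @53) are the heart of the test: bdevperf holds multiple paths to the same subsystem while listeners are removed and re-added underneath it. A condensed dry-run sketch of that sequence; `rpc()` is our echo stand-in for `scripts/rpc.py`, while the NQN, addresses and ports are taken from this log:

```shell
# Dry-run sketch of the listener cycling driven by host/failover.sh.
# Hedged: rpc() is our stand-in for scripts/rpc.py; NQN, address and
# ports come from this log.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1
BPERF="-s /var/tmp/bdevperf.sock"          # bdevperf's own RPC socket

rpc $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"  # path 1
rpc $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"  # path 2
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # drop path 1; I/O fails over
rpc $BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"  # path 3
rpc nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # drop path 2
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore path 1
```

Each listener removal produces the burst of tcp.c:1598 recv-state errors seen in the log as the target tears down its qpair, while bdevperf's verify workload continues on the surviving paths.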
00:32:13.489 [2024-07-12 00:44:41.257396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:13.489 00:44:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:14.863 00:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:14.863 [2024-07-12 00:44:42.557573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333550 is same with the state(5) to be set
[last message repeated 18 times; identical tcp.c:1598 entries for tqpair=0x2333550 with timestamps 00:44:42.557644-00:44:42.557873 elided]
00:44:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1049983
00:32:21.460 0
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1049883
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1049883 ']'
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1049883
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1049883
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:21.460 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:21.461 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1049883'
killing process with pid 1049883
00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1049883
00:32:21.461 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1049883
00:32:21.461 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:21.461 [2024-07-12 00:44:31.929895] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:32:21.461 [2024-07-12 00:44:31.930016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049883 ]
00:32:21.461 EAL: No free 2048 kB hugepages reported on node 1
00:32:21.461 [2024-07-12 00:44:31.986550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:21.461 [2024-07-12 00:44:32.073749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:32:21.461 Running I/O for 15 seconds...
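The 15-second verify run that follows floods the captured try.txt with near-identical nvme_qpair messages differing only in timestamps, cid and lba. When reading such traces, a small syslog-style deduplicator helps; this awk helper is our addition (not part of SPDK), and the two stamp patterns it strips are assumptions based on this log's format:

```shell
# Collapse consecutive log lines that are identical once their leading
# elapsed-time stamp and "[date time]" stamp are stripped.
# Hedged: helper is ours, not part of SPDK's tooling.
dedupe() {
  awk '{
    line = $0
    sub(/^[0-9:.]+ /, "", line)      # strip "00:32:06.402 " elapsed stamp
    sub(/^\[[^]]*\] /, "", line)     # strip "[2024-07-12 ...] " timestamp
    if (line == prev) { n++; next }  # same message as before: just count it
    if (n > 0) printf "  [last message repeated %d time(s)]\n", n
    print                            # first occurrence printed verbatim
    prev = line; n = 0
  } END { if (n > 0) printf "  [last message repeated %d time(s)]\n", n }'
}
```

Piping a raw trace through `dedupe` turns bursts like the tqpair recv-state errors above into one line plus a repeat count, which is the convention used for the elided runs in this transcript.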
00:32:21.461 [2024-07-12 00:44:34.184493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:21.461 [2024-07-12 00:44:34.184536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[38 similar command/completion pairs elided: READs at lba 69288-69576 in strides of 8 blocks, plus one WRITE at lba 70024 len:8, each aborted with SQ DELETION (00/08)]
00:32:21.462 [2024-07-12 00:44:34.185861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:21.462 [2024-07-12 00:44:34.185876] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.185893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.185909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.185926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.185941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.185958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.185973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.185991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 
[2024-07-12 00:44:34.186260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.462 [2024-07-12 00:44:34.186795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.462 [2024-07-12 00:44:34.186810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 
[2024-07-12 00:44:34.186827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.186842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.186859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.186874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.186891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.186906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.186923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.186938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.186955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.186970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.186987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:69904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 
[2024-07-12 00:44:34.187386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.463 [2024-07-12 00:44:34.187679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 00:44:34.187931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.463 [2024-07-12 
00:44:34.187964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.463 [2024-07-12 00:44:34.187979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.187996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188142] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.464 [2024-07-12 00:44:34.188332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:21.464 [2024-07-12 00:44:34.188347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.464 [2024-07-12 00:44:34.188365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:21.464 [2024-07-12 00:44:34.188380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.464 [2024-07-12 00:44:34.188787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:21.464 [2024-07-12 00:44:34.188806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70288 len:8 PRP1 0x0 PRP2 0x0
00:32:21.464 [2024-07-12 00:44:34.188820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.464 [2024-07-12 00:44:34.188841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:21.464 [2024-07-12 00:44:34.188853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:21.464 [2024-07-12 00:44:34.188866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70296 len:8 PRP1 0x0 PRP2 0x0
00:32:21.464 [2024-07-12 00:44:34.188881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.464 [2024-07-12 00:44:34.188940] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2410c10 was disconnected and freed. reset controller.
00:32:21.464 [2024-07-12 00:44:34.188963] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:21.464 [2024-07-12 00:44:34.189001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:21.464 [2024-07-12 00:44:34.189020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.464 [2024-07-12 00:44:34.189124] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:21.464 [2024-07-12 00:44:34.189174] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2360 (9): Bad file descriptor
00:32:21.464 [2024-07-12 00:44:34.193343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:21.464 [2024-07-12 00:44:34.234115] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:32:21.465 [2024-07-12 00:44:37.981598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:21.465 [2024-07-12 00:44:37.981667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.465 [2024-07-12 00:44:37.982140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:21.465 [2024-07-12 00:44:37.982157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.467 [2024-07-12 00:44:37.984661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.467
[2024-07-12 00:44:37.984676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.467 [2024-07-12 00:44:37.984698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:50584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.467 [2024-07-12 00:44:37.984714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.467 [2024-07-12 00:44:37.984731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.467 [2024-07-12 00:44:37.984747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.467 [2024-07-12 00:44:37.984764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:50600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.467 [2024-07-12 00:44:37.984780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.467 [2024-07-12 00:44:37.984797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.984813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.984845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.468 [2024-07-12 00:44:37.984877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.984910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.984942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.984975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.984992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:50688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:21.468 [2024-07-12 00:44:37.985239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:50712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:50720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:50728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:50808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:50832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 
[2024-07-12 00:44:37.985807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.468 [2024-07-12 00:44:37.985903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.468 [2024-07-12 00:44:37.985919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bbd00 is same with the state(5) to be set 00:32:21.469 [2024-07-12 00:44:37.985942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.469 [2024-07-12 00:44:37.985961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.469 [2024-07-12 00:44:37.985975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50872 len:8 PRP1 0x0 PRP2 0x0 00:32:21.469 [2024-07-12 00:44:37.985989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 
00:44:37.986052] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25bbd00 was disconnected and freed. reset controller. 00:32:21.469 [2024-07-12 00:44:37.986076] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:21.469 [2024-07-12 00:44:37.986114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.469 [2024-07-12 00:44:37.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:37.986149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.469 [2024-07-12 00:44:37.986164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:37.986179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.469 [2024-07-12 00:44:37.986193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:37.986209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.469 [2024-07-12 00:44:37.986223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:37.986237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:21.469 [2024-07-12 00:44:37.990348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.469 [2024-07-12 00:44:37.990392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2360 (9): Bad file descriptor 00:32:21.469 [2024-07-12 00:44:38.155779] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:21.469 [2024-07-12 00:44:42.558437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99664 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.558976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.558993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:21.469 [2024-07-12 00:44:42.559208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.469 [2024-07-12 00:44:42.559322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.469 [2024-07-12 00:44:42.559337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.470 [2024-07-12 00:44:42.559534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 
[2024-07-12 00:44:42.559775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.559973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.559990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:21.470 [2024-07-12 00:44:42.560329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.470 [2024-07-12 00:44:42.560425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.470 [2024-07-12 00:44:42.560442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 
[2024-07-12 00:44:42.560888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.560969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.560984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.471 [2024-07-12 00:44:42.561218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.471 [2024-07-12 00:44:42.561395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:21.471 [2024-07-12 00:44:42.561410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:21.472 [2024-07-12 00:44:42.561442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.561970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.561987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 
[2024-07-12 00:44:42.562003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:100512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.472 [2024-07-12 00:44:42.562471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:21.472 [2024-07-12 00:44:42.562486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562581] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.473 [2024-07-12 00:44:42.562601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.473 [2024-07-12 00:44:42.562655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.473 [2024-07-12 00:44:42.562708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.473 [2024-07-12 00:44:42.562760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:21.473 [2024-07-12 00:44:42.562813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:21.473 [2024-07-12 00:44:42.562829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:32:21.473 [2024-07-12 00:44:42.562845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.473 [2024-07-12 00:44:42.562907] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23ee130 was disconnected and freed. reset controller. 
00:32:21.473 [2024-07-12 00:44:42.562931] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:21.473 [2024-07-12 00:44:42.562971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:21.473 [2024-07-12 00:44:42.562990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.473 [2024-07-12 00:44:42.563007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:21.473 [2024-07-12 00:44:42.563021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.473 [2024-07-12 00:44:42.563036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:21.473 [2024-07-12 00:44:42.563050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.473 [2024-07-12 00:44:42.563066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:21.473 [2024-07-12 00:44:42.563087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:21.473 [2024-07-12 00:44:42.563102] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:21.473 [2024-07-12 00:44:42.563167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f2360 (9): Bad file descriptor 00:32:21.473 [2024-07-12 00:44:42.567212] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:21.473 [2024-07-12 00:44:42.692423] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:21.473 00:32:21.473 Latency(us) 00:32:21.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.473 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:21.473 Verification LBA range: start 0x0 length 0x4000 00:32:21.473 NVMe0n1 : 15.01 7638.17 29.84 631.40 0.00 15443.81 667.50 19320.98 00:32:21.473 =================================================================================================================== 00:32:21.473 Total : 7638.17 29.84 631.40 0.00 15443.81 667.50 19320.98 00:32:21.473 Received shutdown signal, test time was about 15.000000 seconds 00:32:21.473 00:32:21.473 Latency(us) 00:32:21.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:21.473 =================================================================================================================== 00:32:21.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1051370 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:21.473 00:44:48 
nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1051370 /var/tmp/bdevperf.sock 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 1051370 ']' 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:21.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:21.473 [2024-07-12 00:44:48.880252] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:21.473 00:44:48 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:21.473 [2024-07-12 00:44:49.120947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:21.473 00:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:21.732 NVMe0n1 00:32:21.732 00:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:22.297 00:32:22.297 00:44:49 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:22.554 00:32:22.555 00:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.555 00:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:22.812 00:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:23.070 00:44:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:26.349 00:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:26.349 00:44:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:26.349 00:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1051887 00:32:26.349 00:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:26.349 00:44:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1051887 00:32:27.722 0 00:32:27.722 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@94 
-- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.722 [2024-07-12 00:44:48.362106] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:27.722 [2024-07-12 00:44:48.362212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1051370 ] 00:32:27.722 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.722 [2024-07-12 00:44:48.422913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.722 [2024-07-12 00:44:48.510249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.722 [2024-07-12 00:44:50.820735] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:27.722 [2024-07-12 00:44:50.820824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.722 [2024-07-12 00:44:50.820849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.722 [2024-07-12 00:44:50.820869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.722 [2024-07-12 00:44:50.820884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.722 [2024-07-12 00:44:50.820900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.722 [2024-07-12 00:44:50.820915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.722 [2024-07-12 00:44:50.820930] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:27.722 [2024-07-12 00:44:50.820945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:27.722 [2024-07-12 00:44:50.820960] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:27.722 [2024-07-12 00:44:50.821018] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:27.722 [2024-07-12 00:44:50.821052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dc4360 (9): Bad file descriptor 00:32:27.722 [2024-07-12 00:44:50.826015] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:27.722 Running I/O for 1 seconds... 00:32:27.722 00:32:27.722 Latency(us) 00:32:27.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:27.722 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:27.722 Verification LBA range: start 0x0 length 0x4000 00:32:27.722 NVMe0n1 : 1.01 7640.93 29.85 0.00 0.00 16675.25 3592.34 13398.47 00:32:27.722 =================================================================================================================== 00:32:27.722 Total : 7640.93 29.85 0.00 0.00 16675.25 3592.34 13398.47 00:32:27.722 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:27.722 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:27.980 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
00:32:28.238 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:28.238 00:44:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:28.496 00:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:28.754 00:44:56 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1051370 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1051370 ']' 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1051370 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1051370 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1051370' 00:32:32.033 killing process with pid 1051370 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1051370 00:32:32.033 
00:44:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1051370 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:32.033 00:44:59 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:32:32.291 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:32.548 rmmod nvme_tcp 00:32:32.548 rmmod nvme_fabrics 00:32:32.548 rmmod nvme_keyring 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1049665 ']' 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1049665 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 1049665 ']' 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 1049665 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@951 -- # uname 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1049665 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1049665' 00:32:32.548 killing process with pid 1049665 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 1049665 00:32:32.548 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 1049665 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:32.806 00:45:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.707 00:45:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:34.707 00:32:34.707 real 0m34.462s 00:32:34.707 user 2m3.132s 00:32:34.707 sys 0m5.530s 00:32:34.707 00:45:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:34.707 00:45:02 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:32:34.707 ************************************ 00:32:34.707 END TEST nvmf_failover 00:32:34.707 ************************************ 00:32:34.707 00:45:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:34.707 00:45:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:34.707 00:45:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:34.707 00:45:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.707 ************************************ 00:32:34.707 START TEST nvmf_host_discovery 00:32:34.707 ************************************ 00:32:34.707 00:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:34.707 * Looking for test storage... 00:32:34.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.707 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.707 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:34.966 00:45:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:32:34.966 00:45:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.344 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:36.345 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:36.345 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:32:36.345 Found net devices under 0000:08:00.0: cvl_0_0 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:36.345 Found net devices under 0000:08:00.1: cvl_0_1 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:36.345 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:36.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:32:36.604 00:32:36.604 --- 10.0.0.2 ping statistics --- 00:32:36.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.604 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:32:36.604 00:32:36.604 --- 10.0.0.1 ping statistics --- 00:32:36.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.604 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1053999 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1053999 00:32:36.604 
00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1053999 ']' 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:36.604 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.604 [2024-07-12 00:45:04.310931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:36.604 [2024-07-12 00:45:04.311021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.604 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.604 [2024-07-12 00:45:04.376613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.863 [2024-07-12 00:45:04.466316] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.863 [2024-07-12 00:45:04.466369] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.863 [2024-07-12 00:45:04.466384] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.863 [2024-07-12 00:45:04.466398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:36.863 [2024-07-12 00:45:04.466410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.863 [2024-07-12 00:45:04.466440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 [2024-07-12 00:45:04.598770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 [2024-07-12 00:45:04.606906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:36.863 00:45:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 null0 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 null1 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1054028 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1054028 /tmp/host.sock 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 1054028 ']' 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:36.863 
00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:36.863 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:36.863 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.863 [2024-07-12 00:45:04.684530] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:36.863 [2024-07-12 00:45:04.684636] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054028 ] 00:32:37.121 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.121 [2024-07-12 00:45:04.745175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.121 [2024-07-12 00:45:04.832854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.121 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.122 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.122 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.122 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.380 00:45:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.380 00:45:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.380 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.380 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == 
'' ]] 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.381 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.639 [2024-07-12 00:45:05.236612] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@912 -- # (( max-- )) 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:37.639 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:32:37.640 00:45:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:32:38.207 [2024-07-12 00:45:05.996764] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:38.207 [2024-07-12 00:45:05.996796] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:38.207 [2024-07-12 00:45:05.996822] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.465 [2024-07-12 00:45:06.083108] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new 
subsystem nvme0 00:32:38.465 [2024-07-12 00:45:06.267125] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:38.465 [2024-07-12 00:45:06.267158] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:38.751 
00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # 
get_subsystem_paths nvme0 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.751 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 
00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.752 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' 
'"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:32:39.017 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 [2024-07-12 00:45:06.705124] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:39.018 [2024-07-12 00:45:06.706195] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:39.018 [2024-07-12 00:45:06.706240] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 
00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.018 [2024-07-12 00:45:06.791862] bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- 
# (( max-- )) 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:39.018 00:45:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:32:39.279 [2024-07-12 00:45:07.093278] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.279 [2024-07-12 00:45:07.093323] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:39.279 [2024-07-12 00:45:07.093335] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == 
'"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' 
== 'expected_count))' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.213 [2024-07-12 00:45:07.940983] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:40.213 [2024-07-12 00:45:07.941024] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.213 00:45:07 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.213 [2024-07-12 00:45:07.948390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.213 [2024-07-12 00:45:07.948427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.213 [2024-07-12 00:45:07.948446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.213 [2024-07-12 00:45:07.948462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.213 [2024-07-12 00:45:07.948478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.213 [2024-07-12 00:45:07.948496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.213 [2024-07-12 00:45:07.948511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:40.213 [2024-07-12 00:45:07.948526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:40.213 [2024-07-12 00:45:07.948541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:40.213 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.213 [2024-07-12 00:45:07.958397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.213 [2024-07-12 00:45:07.968445] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.213 [2024-07-12 00:45:07.968685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.213 [2024-07-12 00:45:07.968729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.213 [2024-07-12 00:45:07.968749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.213 [2024-07-12 00:45:07.968776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.213 [2024-07-12 00:45:07.968801] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.213 [2024-07-12 00:45:07.968817] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.213 [2024-07-12 00:45:07.968835] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.213 [2024-07-12 00:45:07.968860] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.213 [2024-07-12 00:45:07.978530] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.213 [2024-07-12 00:45:07.978692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.213 [2024-07-12 00:45:07.978722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.213 [2024-07-12 00:45:07.978740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.213 [2024-07-12 00:45:07.978764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.213 [2024-07-12 00:45:07.978803] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.213 [2024-07-12 00:45:07.978822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:07.978837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:07.978858] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:40.214 [2024-07-12 00:45:07.988616] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:07.988804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:07.988833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.214 [2024-07-12 00:45:07.988859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:07.988887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:07.988910] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:07.988925] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:07.988940] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:07.988960] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:40.214 00:45:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:40.214 [2024-07-12 00:45:07.998708] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:07.998884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:07.998914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:07.998932] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:07.998956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:07.999010] 
nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:07.999030] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:07.999045] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:07.999067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:40.214 [2024-07-12 00:45:08.008788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:08.008939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:08.008968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:08.008985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:08.009009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:08.009032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:08.009052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:08.009068] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:08.009089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 [2024-07-12 00:45:08.018866] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.214 [2024-07-12 00:45:08.019034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:08.019063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:08.019080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:08.019105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:08.019142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:08.019160] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:08.019175] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:08.019197] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 [2024-07-12 00:45:08.028945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:08.029099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:08.029129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:08.029146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:08.029171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:08.029193] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:08.029208] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:08.029223] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:08.029244] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:40.214 [2024-07-12 00:45:08.039025] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:08.039191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:08.039220] 
nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:08.039237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:08.039261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:08.039455] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:08.039479] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:08.039495] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:08.039517] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.214 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.214 [2024-07-12 00:45:08.049102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.214 [2024-07-12 00:45:08.049834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.214 [2024-07-12 00:45:08.049870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.214 [2024-07-12 00:45:08.049888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.214 [2024-07-12 00:45:08.049913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.214 [2024-07-12 00:45:08.049951] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.214 [2024-07-12 00:45:08.049969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.214 [2024-07-12 00:45:08.049985] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.214 [2024-07-12 00:45:08.050007] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.471 [2024-07-12 00:45:08.059188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:40.471 [2024-07-12 00:45:08.059334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.471 [2024-07-12 00:45:08.059364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1308a00 with addr=10.0.0.2, port=4420 00:32:40.471 [2024-07-12 00:45:08.059382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308a00 is same with the state(5) to be set 00:32:40.471 [2024-07-12 00:45:08.059407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308a00 (9): Bad file descriptor 00:32:40.471 [2024-07-12 00:45:08.059445] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:40.471 [2024-07-12 00:45:08.059463] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:40.471 [2024-07-12 00:45:08.059479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:40.471 [2024-07-12 00:45:08.059502] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:40.471 [2024-07-12 00:45:08.067000] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:40.471 [2024-07-12 00:45:08.067034] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:40.471 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:40.471 00:45:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 
00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:41.403 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.404 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.661 00:45:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.592 [2024-07-12 00:45:10.369389] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:42.592 [2024-07-12 00:45:10.369434] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:42.592 [2024-07-12 00:45:10.369460] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:42.851 [2024-07-12 00:45:10.456734] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:42.851 [2024-07-12 00:45:10.521808] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:42.851 [2024-07-12 00:45:10.521869] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.851 request: 00:32:42.851 { 00:32:42.851 "name": "nvme", 00:32:42.851 "trtype": "tcp", 00:32:42.851 "traddr": "10.0.0.2", 00:32:42.851 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.851 "adrfam": "ipv4", 00:32:42.851 "trsvcid": "8009", 00:32:42.851 "wait_for_attach": true, 00:32:42.851 "method": "bdev_nvme_start_discovery", 00:32:42.851 "req_id": 1 00:32:42.851 } 00:32:42.851 Got JSON-RPC error 
response 00:32:42.851 response: 00:32:42.851 { 00:32:42.851 "code": -17, 00:32:42.851 "message": "File exists" 00:32:42.851 } 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.851 00:45:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.851 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.852 request: 00:32:42.852 { 00:32:42.852 "name": "nvme_second", 00:32:42.852 
"trtype": "tcp", 00:32:42.852 "traddr": "10.0.0.2", 00:32:42.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.852 "adrfam": "ipv4", 00:32:42.852 "trsvcid": "8009", 00:32:42.852 "wait_for_attach": true, 00:32:42.852 "method": "bdev_nvme_start_discovery", 00:32:42.852 "req_id": 1 00:32:42.852 } 00:32:42.852 Got JSON-RPC error response 00:32:42.852 response: 00:32:42.852 { 00:32:42.852 "code": -17, 00:32:42.852 "message": "File exists" 00:32:42.852 } 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.852 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:43.110 00:45:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.110 00:45:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.050 [2024-07-12 00:45:11.745434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.050 [2024-07-12 00:45:11.745492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a3b40 with addr=10.0.0.2, port=8010 00:32:44.050 [2024-07-12 00:45:11.745522] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:44.050 [2024-07-12 00:45:11.745539] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:44.050 [2024-07-12 00:45:11.745555] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:44.984 [2024-07-12 00:45:12.747908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:44.984 [2024-07-12 00:45:12.747982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a3b40 with addr=10.0.0.2, port=8010 00:32:44.984 [2024-07-12 00:45:12.748013] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:44.984 [2024-07-12 00:45:12.748031] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:44.984 [2024-07-12 00:45:12.748053] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:45.917 [2024-07-12 00:45:13.750082] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:45.917 request: 00:32:45.917 { 00:32:45.917 "name": "nvme_second", 00:32:45.917 "trtype": "tcp", 00:32:45.917 "traddr": "10.0.0.2", 00:32:45.917 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:45.917 "adrfam": "ipv4", 00:32:45.917 "trsvcid": "8010", 00:32:45.917 "attach_timeout_ms": 3000, 
00:32:45.917 "method": "bdev_nvme_start_discovery", 00:32:45.917 "req_id": 1 00:32:45.917 } 00:32:45.917 Got JSON-RPC error response 00:32:45.917 response: 00:32:45.917 { 00:32:45.917 "code": -110, 00:32:45.917 "message": "Connection timed out" 00:32:45.917 } 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:45.917 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1054028 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:46.176 00:45:13 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:46.176 rmmod nvme_tcp 00:32:46.176 rmmod nvme_fabrics 00:32:46.176 rmmod nvme_keyring 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1053999 ']' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1053999 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 1053999 ']' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 1053999 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1053999 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1053999' 
00:32:46.176 killing process with pid 1053999 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 1053999 00:32:46.176 00:45:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 1053999 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:46.435 00:45:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.340 00:45:16 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:48.340 00:32:48.340 real 0m13.595s 00:32:48.340 user 0m20.729s 00:32:48.340 sys 0m2.568s 00:32:48.340 00:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:48.340 00:45:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:48.340 ************************************ 00:32:48.340 END TEST nvmf_host_discovery 00:32:48.340 ************************************ 00:32:48.340 00:45:16 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:48.340 00:45:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:48.340 00:45:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:32:48.340 00:45:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.340 ************************************ 00:32:48.340 START TEST nvmf_host_multipath_status 00:32:48.340 ************************************ 00:32:48.340 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:48.599 * Looking for test storage... 00:32:48.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.599 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:32:48.600 00:45:16 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.600 
00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:32:48.600 00:45:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:32:49.977 Found 0000:08:00.0 (0x8086 - 0x159b) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:32:49.977 Found 0000:08:00.1 (0x8086 - 0x159b) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:32:49.977 Found net devices under 0000:08:00.0: cvl_0_0 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:32:49.977 Found net devices under 0000:08:00.1: cvl_0_1 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:49.977 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.978 00:45:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.978 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:50.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:50.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:32:50.236 00:32:50.236 --- 10.0.0.2 ping statistics --- 00:32:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.236 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:50.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:32:50.236 00:32:50.236 --- 10.0.0.1 ping statistics --- 00:32:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.236 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:50.236 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1057015 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1057015 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1057015 ']' 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:50.237 00:45:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.237 [2024-07-12 00:45:17.948602] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:32:50.237 [2024-07-12 00:45:17.948696] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.237 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.237 [2024-07-12 00:45:18.011973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:50.495 [2024-07-12 00:45:18.099078] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.495 [2024-07-12 00:45:18.099133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.495 [2024-07-12 00:45:18.099148] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.495 [2024-07-12 00:45:18.099162] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.495 [2024-07-12 00:45:18.099174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:50.495 [2024-07-12 00:45:18.099255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.495 [2024-07-12 00:45:18.099261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1057015 00:32:50.495 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.753 [2024-07-12 00:45:18.498681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.753 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:51.011 Malloc0 00:32:51.011 00:45:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:51.577 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:51.835 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.093 [2024-07-12 00:45:19.707395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.093 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:52.352 [2024-07-12 00:45:19.972077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1057235 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1057235 /var/tmp/bdevperf.sock 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 1057235 ']' 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:32:52.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:52.352 00:45:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.611 00:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:52.611 00:45:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:32:52.611 00:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:52.869 00:45:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:32:53.435 Nvme0n1 00:32:53.435 00:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:53.694 Nvme0n1 00:32:53.694 00:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:53.694 00:45:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:56.274 00:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:56.274 00:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:56.274 00:45:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:56.274 00:45:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:57.205 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:57.205 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:57.205 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.205 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:57.771 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:58.029 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.029 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:58.029 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.029 00:45:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.287 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.287 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:58.287 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.287 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:58.544 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.544 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:58.544 00:45:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.544 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:58.802 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.802 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:58.802 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:59.060 00:45:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.320 00:45:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:00.257 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:00.257 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.257 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.257 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.821 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:00.821 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.821 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.821 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:01.079 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.079 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:01.079 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.079 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:01.337 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.337 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.337 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.337 00:45:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:01.594 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.594 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 
4420 accessible true 00:33:01.594 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.594 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.852 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.852 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.852 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.852 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.110 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.110 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:02.110 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.368 00:45:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:02.626 00:45:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:03.560 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:03.560 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:03.560 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.560 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.818 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.818 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:03.818 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.818 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.076 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.076 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.076 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.077 00:45:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:04.335 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.335 00:45:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:04.335 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.335 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:04.593 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.593 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:04.593 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.594 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.160 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 
00:33:05.161 00:45:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:05.418 00:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:05.675 00:45:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.052 00:45:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.310 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# [[ false == \f\a\l\s\e ]] 00:33:07.310 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.310 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.310 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:07.567 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:07.567 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:07.567 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.567 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.133 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.133 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:08.133 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.133 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.392 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.392 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:33:08.392 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.392 00:45:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.650 00:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.650 00:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:08.650 00:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.908 00:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:09.166 00:45:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:10.104 00:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:10.104 00:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.104 00:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.104 00:45:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.362 00:45:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.362 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:10.362 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.362 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.647 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.647 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.647 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.647 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.914 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.914 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.914 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.914 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.172 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.172 
00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:11.172 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.172 00:45:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:11.740 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:12.306 00:45:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:12.566 00:45:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@113 -- # sleep 1 00:33:13.504 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:13.504 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:13.504 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.504 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:13.763 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:13.763 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:13.763 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.763 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.021 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.021 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.021 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.021 00:45:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:14.280 00:45:42 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.280 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:14.280 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.280 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:14.538 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.538 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:14.538 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.538 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:14.796 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.796 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:14.796 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.796 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.055 00:45:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.055 00:45:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:15.319 00:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:15.319 00:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:15.580 00:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:15.837 00:45:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:16.778 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:16.778 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:16.778 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.778 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:17.035 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.035 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:17.035 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.035 00:45:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.292 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.292 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.292 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.292 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:33:17.856 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:18.114 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.114 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.114 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.114 00:45:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.372 00:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.372 00:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:18.372 00:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:18.630 00:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:18.888 00:45:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:19.822 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:19.822 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:19.822 00:45:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.822 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.080 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.080 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:20.080 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.080 00:45:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.339 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.339 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.339 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.339 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.597 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.597 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.597 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.597 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.855 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.855 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.855 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.855 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.113 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.113 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.113 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.113 00:45:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.371 00:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.371 00:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:21.371 00:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:21.629 00:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:21.888 00:45:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:22.823 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:22.823 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:22.823 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.823 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.389 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.389 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.389 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.389 00:45:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.389 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.389 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.389 00:45:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.389 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.954 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.954 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.954 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.954 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:24.212 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.212 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:24.212 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.212 00:45:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:24.470 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.470 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:24.470 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.470 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.728 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.728 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:24.728 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.994 00:45:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:25.258 00:45:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:26.194 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:26.194 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:26.194 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.194 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:26.760 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.760 00:45:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:26.760 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.760 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:27.018 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:27.018 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:27.018 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.018 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:27.277 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.277 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:27.277 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.277 00:45:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:27.534 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.534 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:27.535 
00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.535 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:27.791 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:27.791 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:27.791 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:27.791 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1057235 ']' 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1057235' 00:33:28.051 killing process with pid 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1057235 00:33:28.051 Connection closed with partial response: 00:33:28.051 00:33:28.051 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1057235 00:33:28.051 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:28.051 [2024-07-12 00:45:20.028402] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:28.051 [2024-07-12 00:45:20.028510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057235 ] 00:33:28.051 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.051 [2024-07-12 00:45:20.082469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.051 [2024-07-12 00:45:20.172251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.051 Running I/O for 90 seconds... 
00:33:28.051 [2024-07-12 00:45:36.510991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 [2024-07-12 00:45:36.511057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 [2024-07-12 00:45:36.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 [2024-07-12 00:45:36.511186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 [2024-07-12 00:45:36.511229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 [2024-07-12 00:45:36.511271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.051 
[2024-07-12 00:45:36.511313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 
00:45:36.511560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 
00:45:36.511800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.051 [2024-07-12 00:45:36.511928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:28.051 [2024-07-12 00:45:36.511958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.511976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 
00:45:36.512136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 
00:45:36.512380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.512470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.052 [2024-07-12 00:45:36.512514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.512540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.052 [2024-07-12 00:45:36.512558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.052 [2024-07-12 00:45:36.513027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 
00:45:36.513059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 
00:45:36.513311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 
00:45:36.513566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 00:45:36.513789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:28.052 [2024-07-12 00:45:36.513875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.052 [2024-07-12 
00:45:36.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.513928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.513946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.513975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.513993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 
00:45:36.514163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 
00:45:36.514422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 
00:45:36.514699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.514917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 
00:45:36.514968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.514998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 
00:45:36.515237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.053 [2024-07-12 00:45:36.515443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:28.053 [2024-07-12 00:45:36.515472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 
00:45:36.515490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 
00:45:36.515770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.515957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.515975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 
00:45:36.516022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 
00:45:36.516295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.516505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 
00:45:36.516552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.516831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.517139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.054 [2024-07-12 00:45:36.517164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.517202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.517221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.517256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.517274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.517309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.054 [2024-07-12 00:45:36.517327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:28.054 [2024-07-12 00:45:36.517361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.517954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.517972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:36.518006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:36.518025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.993714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.993788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.993876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.993903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.993922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.993947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.993965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.993990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.055 [2024-07-12 00:45:52.994150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.055 [2024-07-12 00:45:52.994677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:28.055 [2024-07-12 00:45:52.994702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.994720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.994745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.994763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.994789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.994807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.994833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.056 [2024-07-12 00:45:52.994851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:28.056 [2024-07-12 00:45:52.997790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.056 [2024-07-12 00:45:52.997808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.997834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.997856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.997881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.997899] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.997924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.997942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.997967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.997985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.057 [2024-07-12 00:45:52.998243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.057 [2024-07-12 00:45:52.998286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.057 [2024-07-12 00:45:52.998329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.057 [2024-07-12 00:45:52.998372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:28.057 [2024-07-12 00:45:52.998464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:28.057 [2024-07-12 00:45:52.998489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.057 [2024-07-12 00:45:52.998507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:28.057 Received shutdown signal, test time was about 33.956714 seconds 00:33:28.057 00:33:28.057 Latency(us) 00:33:28.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.057 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:28.057 Verification LBA range: start 0x0 length 0x4000 00:33:28.057 Nvme0n1 : 33.96 7471.60 29.19 0.00 0.00 17097.79 223.00 4026531.84 00:33:28.057 =================================================================================================================== 00:33:28.057 Total : 7471.60 29.19 0.00 0.00 17097.79 223.00 4026531.84 00:33:28.057 00:45:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:28.315 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:28.315 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:28.573 rmmod nvme_tcp 00:33:28.573 rmmod nvme_fabrics 00:33:28.573 rmmod nvme_keyring 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1057015 ']' 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1057015 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 1057015 ']' 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 1057015 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:33:28.573 00:45:56 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1057015 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1057015' 00:33:28.573 killing process with pid 1057015 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 1057015 00:33:28.573 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 1057015 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:28.830 00:45:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.743 00:45:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:30.743 00:33:30.743 real 0m42.324s 00:33:30.743 user 2m10.982s 00:33:30.743 sys 0m9.777s 00:33:30.743 00:45:58 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:30.743 00:45:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:30.743 ************************************ 00:33:30.743 END TEST nvmf_host_multipath_status 00:33:30.743 ************************************ 00:33:30.743 00:45:58 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:30.743 00:45:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:30.743 00:45:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:30.743 00:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:30.743 ************************************ 00:33:30.743 START TEST nvmf_discovery_remove_ifc 00:33:30.743 ************************************ 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:30.743 * Looking for test storage... 
00:33:30.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.743 00:45:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:30.743 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:31.000 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.001 00:45:58 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:33:31.001 00:45:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:32.378 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:32.378 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:32.378 Found net devices under 0000:08:00.0: cvl_0_0 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:32.378 Found net devices under 0000:08:00.1: cvl_0_1 
00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:32.378 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:32.379 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:32.379 
00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:32.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:33:32.637 00:33:32.637 --- 10.0.0.2 ping statistics --- 00:33:32.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.637 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:32.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:33:32.637 00:33:32.637 --- 10.0.0.1 ping statistics --- 00:33:32.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.637 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1062193 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:32.637 
00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1062193 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1062193 ']' 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.637 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.637 [2024-07-12 00:46:00.371817] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:32.637 [2024-07-12 00:46:00.371921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.637 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.637 [2024-07-12 00:46:00.436308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.895 [2024-07-12 00:46:00.522496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.895 [2024-07-12 00:46:00.522556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:32.895 [2024-07-12 00:46:00.522572] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.895 [2024-07-12 00:46:00.522593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.895 [2024-07-12 00:46:00.522607] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:32.895 [2024-07-12 00:46:00.522639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.895 [2024-07-12 00:46:00.652757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.895 [2024-07-12 00:46:00.660922] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:32.895 null0 00:33:32.895 [2024-07-12 00:46:00.692889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1062213 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1062213 /tmp/host.sock 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 1062213 ']' 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:32.895 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.895 00:46:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.154 [2024-07-12 00:46:00.761101] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:33.154 [2024-07-12 00:46:00.761199] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062213 ] 00:33:33.154 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.154 [2024-07-12 00:46:00.821873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.154 [2024-07-12 00:46:00.912773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.412 00:46:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.344 [2024-07-12 00:46:02.160279] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:34.345 [2024-07-12 00:46:02.160325] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:34.345 [2024-07-12 00:46:02.160351] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:34.603 [2024-07-12 00:46:02.288774] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:34.603 [2024-07-12 00:46:02.391407] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:34.603 [2024-07-12 00:46:02.391490] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:34.603 [2024-07-12 00:46:02.391533] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:34.603 [2024-07-12 00:46:02.391570] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:34.603 [2024-07-12 00:46:02.391618] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.603 [2024-07-12 00:46:02.398935] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2406530 was disconnected and freed. delete nvme_qpair. 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:34.603 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.860 00:46:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.788 00:46:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.158 00:46:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.090 00:46:05 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.021 00:46:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.952 00:46:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.209 [2024-07-12 00:46:07.832432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:40.209 [2024-07-12 00:46:07.832511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.209 [2024-07-12 00:46:07.832545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.209 [2024-07-12 00:46:07.832572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.209 [2024-07-12 00:46:07.832593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.209 [2024-07-12 00:46:07.832618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.209 [2024-07-12 00:46:07.832633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.209 [2024-07-12 00:46:07.832649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.209 [2024-07-12 00:46:07.832663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.209 [2024-07-12 00:46:07.832679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.209 [2024-07-12 00:46:07.832694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.209 [2024-07-12 00:46:07.832708] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cd560 is same with the state(5) to be set 00:33:40.209 [2024-07-12 00:46:07.842450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cd560 (9): Bad file descriptor 00:33:40.209 [2024-07-12 00:46:07.852497] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.182 [2024-07-12 00:46:08.880669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:41.182 [2024-07-12 00:46:08.880742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cd560 with addr=10.0.0.2, port=4420 00:33:41.182 [2024-07-12 00:46:08.880772] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cd560 is same with the state(5) to be set 00:33:41.182 [2024-07-12 00:46:08.880844] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cd560 (9): Bad file descriptor 00:33:41.182 [2024-07-12 00:46:08.881315] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:41.182 [2024-07-12 00:46:08.881348] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:41.182 [2024-07-12 00:46:08.881365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:41.182 [2024-07-12 00:46:08.881384] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:41.182 [2024-07-12 00:46:08.881418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:41.182 [2024-07-12 00:46:08.881437] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:41.182 00:46:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.116 [2024-07-12 00:46:09.883926] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:42.116 [2024-07-12 00:46:09.883957] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.116 [2024-07-12 00:46:09.883973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:42.116 [2024-07-12 00:46:09.883987] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:42.116 [2024-07-12 00:46:09.884009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:42.116 [2024-07-12 00:46:09.884048] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:42.116 [2024-07-12 00:46:09.884086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.116 [2024-07-12 00:46:09.884108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.116 [2024-07-12 00:46:09.884130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.116 [2024-07-12 00:46:09.884145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.116 [2024-07-12 00:46:09.884161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.116 [2024-07-12 00:46:09.884176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.116 [2024-07-12 00:46:09.884192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.116 
[2024-07-12 00:46:09.884207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.116 [2024-07-12 00:46:09.884224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:42.116 [2024-07-12 00:46:09.884239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:42.116 [2024-07-12 00:46:09.884254] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:33:42.116 [2024-07-12 00:46:09.884457] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cc9f0 (9): Bad file descriptor 00:33:42.116 [2024-07-12 00:46:09.885478] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:42.116 [2024-07-12 00:46:09.885503] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:42.116 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.374 00:46:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.374 00:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.374 00:46:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:43.307 00:46:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.243 [2024-07-12 00:46:11.936250] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:44.243 [2024-07-12 00:46:11.936299] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:44.243 [2024-07-12 00:46:11.936324] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:44.243 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- 
# xargs 00:33:44.243 [2024-07-12 00:46:12.063736] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:44.244 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.502 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:44.502 00:46:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:44.502 [2024-07-12 00:46:12.247869] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:44.502 [2024-07-12 00:46:12.247943] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:44.502 [2024-07-12 00:46:12.247981] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:44.502 [2024-07-12 00:46:12.248008] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:44.502 [2024-07-12 00:46:12.248023] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:44.502 [2024-07-12 00:46:12.254797] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23e66f0 was disconnected and freed. delete nvme_qpair. 
00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1062213 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1062213 ']' 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 1062213 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1062213 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:45.441 00:46:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1062213' 00:33:45.441 killing process with pid 1062213 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1062213 00:33:45.441 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1062213 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:45.699 rmmod nvme_tcp 00:33:45.699 rmmod nvme_fabrics 00:33:45.699 rmmod nvme_keyring 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1062193 ']' 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1062193 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 1062193 ']' 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # 
kill -0 1062193 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1062193 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1062193' 00:33:45.699 killing process with pid 1062193 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 1062193 00:33:45.699 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 1062193 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:45.959 00:46:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.865 00:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:33:47.865 00:33:47.865 real 0m17.112s 00:33:47.865 user 0m25.445s 00:33:47.865 sys 0m2.587s 00:33:47.865 00:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:47.865 00:46:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:47.865 ************************************ 00:33:47.865 END TEST nvmf_discovery_remove_ifc 00:33:47.865 ************************************ 00:33:47.865 00:46:15 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:47.865 00:46:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:47.865 00:46:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:47.865 00:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:47.865 ************************************ 00:33:47.865 START TEST nvmf_identify_kernel_target 00:33:47.865 ************************************ 00:33:47.865 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:48.124 * Looking for test storage... 
00:33:48.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.124 00:46:15 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.124 00:46:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:33:48.124 00:46:15 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:33:50.022 00:46:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:50.022 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.022 00:46:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:50.022 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:33:50.022 Found net devices under 0000:08:00.0: cvl_0_0 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.022 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:50.023 Found net devices under 0000:08:00.1: cvl_0_1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:50.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:50.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:33:50.023 00:33:50.023 --- 10.0.0.2 ping statistics --- 00:33:50.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.023 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:50.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:50.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:33:50.023 00:33:50.023 --- 10.0.0.1 ping statistics --- 00:33:50.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:50.023 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:50.023 00:46:17 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:50.023 00:46:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:50.959 Waiting for block devices as requested 00:33:50.959 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:33:50.959 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:33:50.959 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:33:50.959 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:33:51.219 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:33:51.219 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:33:51.219 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:33:51.219 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:33:51.478 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:33:51.478 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:33:51.478 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:33:51.478 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:33:51.738 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:33:51.738 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:33:51.738 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:33:51.738 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:33:51.997 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:51.997 No valid GPT data, bailing 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:51.997 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:33:52.258 00:33:52.258 Discovery Log Number of Records 2, Generation counter 2 00:33:52.258 =====Discovery Log Entry 0====== 00:33:52.258 trtype: tcp 00:33:52.258 adrfam: ipv4 00:33:52.258 subtype: current discovery subsystem 00:33:52.258 treq: not specified, sq flow control disable supported 00:33:52.258 portid: 1 00:33:52.258 trsvcid: 4420 00:33:52.258 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:52.258 traddr: 10.0.0.1 00:33:52.258 eflags: none 00:33:52.258 sectype: none 00:33:52.258 =====Discovery Log Entry 1====== 00:33:52.258 trtype: tcp 00:33:52.258 adrfam: ipv4 00:33:52.258 subtype: nvme subsystem 00:33:52.258 treq: not specified, sq flow control disable supported 00:33:52.258 portid: 1 00:33:52.258 trsvcid: 4420 00:33:52.258 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:52.258 traddr: 10.0.0.1 00:33:52.258 eflags: none 00:33:52.258 sectype: none 00:33:52.258 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:52.258 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:52.258 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.258 ===================================================== 00:33:52.258 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:52.258 ===================================================== 00:33:52.258 Controller Capabilities/Features 00:33:52.258 ================================ 00:33:52.258 Vendor ID: 0000 00:33:52.258 Subsystem Vendor ID: 0000 00:33:52.258 Serial Number: 82bbddbf5bc34c430f4d 00:33:52.258 Model Number: Linux 00:33:52.258 Firmware Version: 6.7.0-68 00:33:52.258 Recommended Arb Burst: 0 00:33:52.258 IEEE OUI Identifier: 00 00 00 00:33:52.258 Multi-path I/O 00:33:52.258 May have multiple subsystem ports: No 00:33:52.258 May have multiple controllers: No 00:33:52.258 Associated with SR-IOV VF: No 00:33:52.258 Max Data Transfer Size: Unlimited 00:33:52.258 Max Number of Namespaces: 0 00:33:52.258 Max Number of I/O Queues: 1024 00:33:52.258 NVMe Specification Version (VS): 1.3 00:33:52.258 NVMe Specification Version (Identify): 1.3 00:33:52.258 Maximum Queue Entries: 1024 00:33:52.258 Contiguous Queues Required: No 00:33:52.258 Arbitration Mechanisms Supported 00:33:52.258 Weighted Round Robin: Not Supported 00:33:52.258 Vendor Specific: Not Supported 00:33:52.258 Reset Timeout: 7500 ms 00:33:52.258 Doorbell Stride: 4 bytes 00:33:52.258 NVM Subsystem Reset: Not Supported 00:33:52.258 Command Sets Supported 00:33:52.258 NVM Command Set: Supported 00:33:52.258 Boot Partition: Not Supported 00:33:52.258 Memory Page Size Minimum: 4096 bytes 00:33:52.258 Memory Page Size Maximum: 4096 bytes 00:33:52.258 Persistent Memory Region: Not Supported 00:33:52.258 Optional Asynchronous Events Supported 00:33:52.258 Namespace Attribute Notices: Not Supported 00:33:52.258 Firmware Activation Notices: Not Supported 00:33:52.258 ANA Change Notices: Not Supported 00:33:52.258 PLE Aggregate Log Change Notices: Not Supported 
00:33:52.258 LBA Status Info Alert Notices: Not Supported 00:33:52.258 EGE Aggregate Log Change Notices: Not Supported 00:33:52.258 Normal NVM Subsystem Shutdown event: Not Supported 00:33:52.258 Zone Descriptor Change Notices: Not Supported 00:33:52.258 Discovery Log Change Notices: Supported 00:33:52.258 Controller Attributes 00:33:52.258 128-bit Host Identifier: Not Supported 00:33:52.258 Non-Operational Permissive Mode: Not Supported 00:33:52.259 NVM Sets: Not Supported 00:33:52.259 Read Recovery Levels: Not Supported 00:33:52.259 Endurance Groups: Not Supported 00:33:52.259 Predictable Latency Mode: Not Supported 00:33:52.259 Traffic Based Keep ALive: Not Supported 00:33:52.259 Namespace Granularity: Not Supported 00:33:52.259 SQ Associations: Not Supported 00:33:52.259 UUID List: Not Supported 00:33:52.259 Multi-Domain Subsystem: Not Supported 00:33:52.259 Fixed Capacity Management: Not Supported 00:33:52.259 Variable Capacity Management: Not Supported 00:33:52.259 Delete Endurance Group: Not Supported 00:33:52.259 Delete NVM Set: Not Supported 00:33:52.259 Extended LBA Formats Supported: Not Supported 00:33:52.259 Flexible Data Placement Supported: Not Supported 00:33:52.259 00:33:52.259 Controller Memory Buffer Support 00:33:52.259 ================================ 00:33:52.259 Supported: No 00:33:52.259 00:33:52.259 Persistent Memory Region Support 00:33:52.259 ================================ 00:33:52.259 Supported: No 00:33:52.259 00:33:52.259 Admin Command Set Attributes 00:33:52.259 ============================ 00:33:52.259 Security Send/Receive: Not Supported 00:33:52.259 Format NVM: Not Supported 00:33:52.259 Firmware Activate/Download: Not Supported 00:33:52.259 Namespace Management: Not Supported 00:33:52.259 Device Self-Test: Not Supported 00:33:52.259 Directives: Not Supported 00:33:52.259 NVMe-MI: Not Supported 00:33:52.259 Virtualization Management: Not Supported 00:33:52.259 Doorbell Buffer Config: Not Supported 00:33:52.259 Get LBA Status 
Capability: Not Supported 00:33:52.259 Command & Feature Lockdown Capability: Not Supported 00:33:52.259 Abort Command Limit: 1 00:33:52.259 Async Event Request Limit: 1 00:33:52.259 Number of Firmware Slots: N/A 00:33:52.259 Firmware Slot 1 Read-Only: N/A 00:33:52.259 Firmware Activation Without Reset: N/A 00:33:52.259 Multiple Update Detection Support: N/A 00:33:52.259 Firmware Update Granularity: No Information Provided 00:33:52.259 Per-Namespace SMART Log: No 00:33:52.259 Asymmetric Namespace Access Log Page: Not Supported 00:33:52.259 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:52.259 Command Effects Log Page: Not Supported 00:33:52.259 Get Log Page Extended Data: Supported 00:33:52.259 Telemetry Log Pages: Not Supported 00:33:52.259 Persistent Event Log Pages: Not Supported 00:33:52.259 Supported Log Pages Log Page: May Support 00:33:52.259 Commands Supported & Effects Log Page: Not Supported 00:33:52.259 Feature Identifiers & Effects Log Page:May Support 00:33:52.259 NVMe-MI Commands & Effects Log Page: May Support 00:33:52.259 Data Area 4 for Telemetry Log: Not Supported 00:33:52.259 Error Log Page Entries Supported: 1 00:33:52.259 Keep Alive: Not Supported 00:33:52.259 00:33:52.259 NVM Command Set Attributes 00:33:52.259 ========================== 00:33:52.259 Submission Queue Entry Size 00:33:52.259 Max: 1 00:33:52.259 Min: 1 00:33:52.259 Completion Queue Entry Size 00:33:52.259 Max: 1 00:33:52.259 Min: 1 00:33:52.259 Number of Namespaces: 0 00:33:52.259 Compare Command: Not Supported 00:33:52.259 Write Uncorrectable Command: Not Supported 00:33:52.259 Dataset Management Command: Not Supported 00:33:52.259 Write Zeroes Command: Not Supported 00:33:52.259 Set Features Save Field: Not Supported 00:33:52.259 Reservations: Not Supported 00:33:52.259 Timestamp: Not Supported 00:33:52.259 Copy: Not Supported 00:33:52.259 Volatile Write Cache: Not Present 00:33:52.259 Atomic Write Unit (Normal): 1 00:33:52.259 Atomic Write Unit (PFail): 1 
00:33:52.259 Atomic Compare & Write Unit: 1 00:33:52.259 Fused Compare & Write: Not Supported 00:33:52.259 Scatter-Gather List 00:33:52.259 SGL Command Set: Supported 00:33:52.259 SGL Keyed: Not Supported 00:33:52.259 SGL Bit Bucket Descriptor: Not Supported 00:33:52.259 SGL Metadata Pointer: Not Supported 00:33:52.259 Oversized SGL: Not Supported 00:33:52.259 SGL Metadata Address: Not Supported 00:33:52.259 SGL Offset: Supported 00:33:52.259 Transport SGL Data Block: Not Supported 00:33:52.259 Replay Protected Memory Block: Not Supported 00:33:52.259 00:33:52.259 Firmware Slot Information 00:33:52.259 ========================= 00:33:52.259 Active slot: 0 00:33:52.259 00:33:52.259 00:33:52.259 Error Log 00:33:52.259 ========= 00:33:52.259 00:33:52.259 Active Namespaces 00:33:52.259 ================= 00:33:52.259 Discovery Log Page 00:33:52.259 ================== 00:33:52.259 Generation Counter: 2 00:33:52.259 Number of Records: 2 00:33:52.259 Record Format: 0 00:33:52.259 00:33:52.259 Discovery Log Entry 0 00:33:52.259 ---------------------- 00:33:52.259 Transport Type: 3 (TCP) 00:33:52.259 Address Family: 1 (IPv4) 00:33:52.259 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:52.259 Entry Flags: 00:33:52.259 Duplicate Returned Information: 0 00:33:52.259 Explicit Persistent Connection Support for Discovery: 0 00:33:52.259 Transport Requirements: 00:33:52.259 Secure Channel: Not Specified 00:33:52.259 Port ID: 1 (0x0001) 00:33:52.259 Controller ID: 65535 (0xffff) 00:33:52.259 Admin Max SQ Size: 32 00:33:52.259 Transport Service Identifier: 4420 00:33:52.259 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:52.259 Transport Address: 10.0.0.1 00:33:52.259 Discovery Log Entry 1 00:33:52.259 ---------------------- 00:33:52.259 Transport Type: 3 (TCP) 00:33:52.259 Address Family: 1 (IPv4) 00:33:52.259 Subsystem Type: 2 (NVM Subsystem) 00:33:52.259 Entry Flags: 00:33:52.259 Duplicate Returned Information: 0 00:33:52.259 Explicit Persistent 
Connection Support for Discovery: 0 00:33:52.259 Transport Requirements: 00:33:52.259 Secure Channel: Not Specified 00:33:52.259 Port ID: 1 (0x0001) 00:33:52.259 Controller ID: 65535 (0xffff) 00:33:52.259 Admin Max SQ Size: 32 00:33:52.259 Transport Service Identifier: 4420 00:33:52.259 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:52.259 Transport Address: 10.0.0.1 00:33:52.259 00:46:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:52.259 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.259 get_feature(0x01) failed 00:33:52.259 get_feature(0x02) failed 00:33:52.259 get_feature(0x04) failed 00:33:52.259 ===================================================== 00:33:52.259 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:52.259 ===================================================== 00:33:52.259 Controller Capabilities/Features 00:33:52.259 ================================ 00:33:52.259 Vendor ID: 0000 00:33:52.259 Subsystem Vendor ID: 0000 00:33:52.259 Serial Number: 2d6475f333468ce44461 00:33:52.259 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:52.259 Firmware Version: 6.7.0-68 00:33:52.259 Recommended Arb Burst: 6 00:33:52.259 IEEE OUI Identifier: 00 00 00 00:33:52.259 Multi-path I/O 00:33:52.259 May have multiple subsystem ports: Yes 00:33:52.259 May have multiple controllers: Yes 00:33:52.259 Associated with SR-IOV VF: No 00:33:52.259 Max Data Transfer Size: Unlimited 00:33:52.259 Max Number of Namespaces: 1024 00:33:52.259 Max Number of I/O Queues: 128 00:33:52.259 NVMe Specification Version (VS): 1.3 00:33:52.259 NVMe Specification Version (Identify): 1.3 00:33:52.259 Maximum Queue Entries: 1024 00:33:52.259 Contiguous Queues Required: No 00:33:52.259 Arbitration Mechanisms Supported 
00:33:52.259 Weighted Round Robin: Not Supported 00:33:52.259 Vendor Specific: Not Supported 00:33:52.259 Reset Timeout: 7500 ms 00:33:52.259 Doorbell Stride: 4 bytes 00:33:52.259 NVM Subsystem Reset: Not Supported 00:33:52.259 Command Sets Supported 00:33:52.259 NVM Command Set: Supported 00:33:52.259 Boot Partition: Not Supported 00:33:52.259 Memory Page Size Minimum: 4096 bytes 00:33:52.259 Memory Page Size Maximum: 4096 bytes 00:33:52.259 Persistent Memory Region: Not Supported 00:33:52.259 Optional Asynchronous Events Supported 00:33:52.259 Namespace Attribute Notices: Supported 00:33:52.259 Firmware Activation Notices: Not Supported 00:33:52.259 ANA Change Notices: Supported 00:33:52.259 PLE Aggregate Log Change Notices: Not Supported 00:33:52.259 LBA Status Info Alert Notices: Not Supported 00:33:52.259 EGE Aggregate Log Change Notices: Not Supported 00:33:52.259 Normal NVM Subsystem Shutdown event: Not Supported 00:33:52.259 Zone Descriptor Change Notices: Not Supported 00:33:52.259 Discovery Log Change Notices: Not Supported 00:33:52.259 Controller Attributes 00:33:52.259 128-bit Host Identifier: Supported 00:33:52.259 Non-Operational Permissive Mode: Not Supported 00:33:52.259 NVM Sets: Not Supported 00:33:52.259 Read Recovery Levels: Not Supported 00:33:52.259 Endurance Groups: Not Supported 00:33:52.259 Predictable Latency Mode: Not Supported 00:33:52.259 Traffic Based Keep ALive: Supported 00:33:52.259 Namespace Granularity: Not Supported 00:33:52.259 SQ Associations: Not Supported 00:33:52.259 UUID List: Not Supported 00:33:52.259 Multi-Domain Subsystem: Not Supported 00:33:52.259 Fixed Capacity Management: Not Supported 00:33:52.259 Variable Capacity Management: Not Supported 00:33:52.260 Delete Endurance Group: Not Supported 00:33:52.260 Delete NVM Set: Not Supported 00:33:52.260 Extended LBA Formats Supported: Not Supported 00:33:52.260 Flexible Data Placement Supported: Not Supported 00:33:52.260 00:33:52.260 Controller Memory Buffer Support 
00:33:52.260 ================================ 00:33:52.260 Supported: No 00:33:52.260 00:33:52.260 Persistent Memory Region Support 00:33:52.260 ================================ 00:33:52.260 Supported: No 00:33:52.260 00:33:52.260 Admin Command Set Attributes 00:33:52.260 ============================ 00:33:52.260 Security Send/Receive: Not Supported 00:33:52.260 Format NVM: Not Supported 00:33:52.260 Firmware Activate/Download: Not Supported 00:33:52.260 Namespace Management: Not Supported 00:33:52.260 Device Self-Test: Not Supported 00:33:52.260 Directives: Not Supported 00:33:52.260 NVMe-MI: Not Supported 00:33:52.260 Virtualization Management: Not Supported 00:33:52.260 Doorbell Buffer Config: Not Supported 00:33:52.260 Get LBA Status Capability: Not Supported 00:33:52.260 Command & Feature Lockdown Capability: Not Supported 00:33:52.260 Abort Command Limit: 4 00:33:52.260 Async Event Request Limit: 4 00:33:52.260 Number of Firmware Slots: N/A 00:33:52.260 Firmware Slot 1 Read-Only: N/A 00:33:52.260 Firmware Activation Without Reset: N/A 00:33:52.260 Multiple Update Detection Support: N/A 00:33:52.260 Firmware Update Granularity: No Information Provided 00:33:52.260 Per-Namespace SMART Log: Yes 00:33:52.260 Asymmetric Namespace Access Log Page: Supported 00:33:52.260 ANA Transition Time : 10 sec 00:33:52.260 00:33:52.260 Asymmetric Namespace Access Capabilities 00:33:52.260 ANA Optimized State : Supported 00:33:52.260 ANA Non-Optimized State : Supported 00:33:52.260 ANA Inaccessible State : Supported 00:33:52.260 ANA Persistent Loss State : Supported 00:33:52.260 ANA Change State : Supported 00:33:52.260 ANAGRPID is not changed : No 00:33:52.260 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:52.260 00:33:52.260 ANA Group Identifier Maximum : 128 00:33:52.260 Number of ANA Group Identifiers : 128 00:33:52.260 Max Number of Allowed Namespaces : 1024 00:33:52.260 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:52.260 Command Effects Log Page: Supported 
00:33:52.260 Get Log Page Extended Data: Supported 00:33:52.260 Telemetry Log Pages: Not Supported 00:33:52.260 Persistent Event Log Pages: Not Supported 00:33:52.260 Supported Log Pages Log Page: May Support 00:33:52.260 Commands Supported & Effects Log Page: Not Supported 00:33:52.260 Feature Identifiers & Effects Log Page:May Support 00:33:52.260 NVMe-MI Commands & Effects Log Page: May Support 00:33:52.260 Data Area 4 for Telemetry Log: Not Supported 00:33:52.260 Error Log Page Entries Supported: 128 00:33:52.260 Keep Alive: Supported 00:33:52.260 Keep Alive Granularity: 1000 ms 00:33:52.260 00:33:52.260 NVM Command Set Attributes 00:33:52.260 ========================== 00:33:52.260 Submission Queue Entry Size 00:33:52.260 Max: 64 00:33:52.260 Min: 64 00:33:52.260 Completion Queue Entry Size 00:33:52.260 Max: 16 00:33:52.260 Min: 16 00:33:52.260 Number of Namespaces: 1024 00:33:52.260 Compare Command: Not Supported 00:33:52.260 Write Uncorrectable Command: Not Supported 00:33:52.260 Dataset Management Command: Supported 00:33:52.260 Write Zeroes Command: Supported 00:33:52.260 Set Features Save Field: Not Supported 00:33:52.260 Reservations: Not Supported 00:33:52.260 Timestamp: Not Supported 00:33:52.260 Copy: Not Supported 00:33:52.260 Volatile Write Cache: Present 00:33:52.260 Atomic Write Unit (Normal): 1 00:33:52.260 Atomic Write Unit (PFail): 1 00:33:52.260 Atomic Compare & Write Unit: 1 00:33:52.260 Fused Compare & Write: Not Supported 00:33:52.260 Scatter-Gather List 00:33:52.260 SGL Command Set: Supported 00:33:52.260 SGL Keyed: Not Supported 00:33:52.260 SGL Bit Bucket Descriptor: Not Supported 00:33:52.260 SGL Metadata Pointer: Not Supported 00:33:52.260 Oversized SGL: Not Supported 00:33:52.260 SGL Metadata Address: Not Supported 00:33:52.260 SGL Offset: Supported 00:33:52.260 Transport SGL Data Block: Not Supported 00:33:52.260 Replay Protected Memory Block: Not Supported 00:33:52.260 00:33:52.260 Firmware Slot Information 00:33:52.260 
========================= 00:33:52.260 Active slot: 0 00:33:52.260 00:33:52.260 Asymmetric Namespace Access 00:33:52.260 =========================== 00:33:52.260 Change Count : 0 00:33:52.260 Number of ANA Group Descriptors : 1 00:33:52.260 ANA Group Descriptor : 0 00:33:52.260 ANA Group ID : 1 00:33:52.260 Number of NSID Values : 1 00:33:52.260 Change Count : 0 00:33:52.260 ANA State : 1 00:33:52.260 Namespace Identifier : 1 00:33:52.260 00:33:52.260 Commands Supported and Effects 00:33:52.260 ============================== 00:33:52.260 Admin Commands 00:33:52.260 -------------- 00:33:52.260 Get Log Page (02h): Supported 00:33:52.260 Identify (06h): Supported 00:33:52.260 Abort (08h): Supported 00:33:52.260 Set Features (09h): Supported 00:33:52.260 Get Features (0Ah): Supported 00:33:52.260 Asynchronous Event Request (0Ch): Supported 00:33:52.260 Keep Alive (18h): Supported 00:33:52.260 I/O Commands 00:33:52.260 ------------ 00:33:52.260 Flush (00h): Supported 00:33:52.260 Write (01h): Supported LBA-Change 00:33:52.260 Read (02h): Supported 00:33:52.260 Write Zeroes (08h): Supported LBA-Change 00:33:52.260 Dataset Management (09h): Supported 00:33:52.260 00:33:52.260 Error Log 00:33:52.260 ========= 00:33:52.260 Entry: 0 00:33:52.260 Error Count: 0x3 00:33:52.260 Submission Queue Id: 0x0 00:33:52.260 Command Id: 0x5 00:33:52.260 Phase Bit: 0 00:33:52.260 Status Code: 0x2 00:33:52.260 Status Code Type: 0x0 00:33:52.260 Do Not Retry: 1 00:33:52.260 Error Location: 0x28 00:33:52.260 LBA: 0x0 00:33:52.260 Namespace: 0x0 00:33:52.260 Vendor Log Page: 0x0 00:33:52.260 ----------- 00:33:52.260 Entry: 1 00:33:52.260 Error Count: 0x2 00:33:52.260 Submission Queue Id: 0x0 00:33:52.260 Command Id: 0x5 00:33:52.260 Phase Bit: 0 00:33:52.260 Status Code: 0x2 00:33:52.260 Status Code Type: 0x0 00:33:52.260 Do Not Retry: 1 00:33:52.260 Error Location: 0x28 00:33:52.260 LBA: 0x0 00:33:52.260 Namespace: 0x0 00:33:52.260 Vendor Log Page: 0x0 00:33:52.260 ----------- 00:33:52.260 
Entry: 2 00:33:52.260 Error Count: 0x1 00:33:52.260 Submission Queue Id: 0x0 00:33:52.260 Command Id: 0x4 00:33:52.260 Phase Bit: 0 00:33:52.260 Status Code: 0x2 00:33:52.260 Status Code Type: 0x0 00:33:52.260 Do Not Retry: 1 00:33:52.260 Error Location: 0x28 00:33:52.260 LBA: 0x0 00:33:52.260 Namespace: 0x0 00:33:52.260 Vendor Log Page: 0x0 00:33:52.260 00:33:52.260 Number of Queues 00:33:52.260 ================ 00:33:52.260 Number of I/O Submission Queues: 128 00:33:52.260 Number of I/O Completion Queues: 128 00:33:52.260 00:33:52.260 ZNS Specific Controller Data 00:33:52.260 ============================ 00:33:52.260 Zone Append Size Limit: 0 00:33:52.260 00:33:52.260 00:33:52.260 Active Namespaces 00:33:52.260 ================= 00:33:52.260 get_feature(0x05) failed 00:33:52.260 Namespace ID:1 00:33:52.260 Command Set Identifier: NVM (00h) 00:33:52.260 Deallocate: Supported 00:33:52.260 Deallocated/Unwritten Error: Not Supported 00:33:52.260 Deallocated Read Value: Unknown 00:33:52.260 Deallocate in Write Zeroes: Not Supported 00:33:52.260 Deallocated Guard Field: 0xFFFF 00:33:52.260 Flush: Supported 00:33:52.260 Reservation: Not Supported 00:33:52.260 Namespace Sharing Capabilities: Multiple Controllers 00:33:52.260 Size (in LBAs): 1953525168 (931GiB) 00:33:52.260 Capacity (in LBAs): 1953525168 (931GiB) 00:33:52.260 Utilization (in LBAs): 1953525168 (931GiB) 00:33:52.260 UUID: f9e734f7-816c-45d2-aeec-edd7423b8cb0 00:33:52.260 Thin Provisioning: Not Supported 00:33:52.260 Per-NS Atomic Units: Yes 00:33:52.260 Atomic Boundary Size (Normal): 0 00:33:52.260 Atomic Boundary Size (PFail): 0 00:33:52.260 Atomic Boundary Offset: 0 00:33:52.260 NGUID/EUI64 Never Reused: No 00:33:52.260 ANA group ID: 1 00:33:52.260 Namespace Write Protected: No 00:33:52.260 Number of LBA Formats: 1 00:33:52.260 Current LBA Format: LBA Format #00 00:33:52.260 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:52.260 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:52.260 rmmod nvme_tcp 00:33:52.260 rmmod nvme_fabrics 00:33:52.260 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.261 00:46:20 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.793 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:54.793 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:54.793 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:54.793 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:54.794 00:46:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:55.374 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:55.375 0000:00:04.0 (8086 3c20): ioatdma -> 
vfio-pci 00:33:55.375 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:33:55.375 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:33:55.638 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:33:56.575 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:33:56.575 00:33:56.575 real 0m8.537s 00:33:56.575 user 0m1.707s 00:33:56.575 sys 0m2.914s 00:33:56.575 00:46:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:56.575 00:46:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:56.575 ************************************ 00:33:56.575 END TEST nvmf_identify_kernel_target 00:33:56.575 ************************************ 00:33:56.575 00:46:24 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:56.575 00:46:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:56.575 00:46:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:56.575 00:46:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.575 ************************************ 00:33:56.575 START TEST nvmf_auth_host 00:33:56.575 ************************************ 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:56.575 * Looking for test storage... 
00:33:56.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.575 
00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:56.575 
00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:33:56.575 00:46:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:33:58.487 Found 0000:08:00.0 (0x8086 - 0x159b) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.487 00:46:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:33:58.487 Found 0000:08:00.1 (0x8086 - 0x159b) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:33:58.487 Found net devices under 0000:08:00.0: cvl_0_0 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:58.487 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:33:58.488 Found net devices under 0000:08:00.1: cvl_0_1 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:58.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:58.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:33:58.488 00:33:58.488 --- 10.0.0.2 ping statistics --- 00:33:58.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.488 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:33:58.488 00:33:58.488 --- 10.0.0.1 ping statistics --- 00:33:58.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.488 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.488 00:46:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1067620 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1067620 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1067620 ']' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:58.488 00:46:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.488 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:58.488 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:58.488 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:58.488 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:58.488 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:58.799 00:46:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=334fe20a2b473a09d19008b78d24dbaa 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V8Q 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 334fe20a2b473a09d19008b78d24dbaa 0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 334fe20a2b473a09d19008b78d24dbaa 0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=334fe20a2b473a09d19008b78d24dbaa 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V8Q 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V8Q 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.V8Q 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a72328472485440d3d454434a639aaf65117381d2f8b6fb85b119a04859b929 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.nt0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a72328472485440d3d454434a639aaf65117381d2f8b6fb85b119a04859b929 3 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a72328472485440d3d454434a639aaf65117381d2f8b6fb85b119a04859b929 3 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a72328472485440d3d454434a639aaf65117381d2f8b6fb85b119a04859b929 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.nt0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.nt0 00:33:58.799 00:46:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.nt0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c2fbf19b43d226ac3df946ba1ff9b7c36a115f66e1f3b39a 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CX9 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c2fbf19b43d226ac3df946ba1ff9b7c36a115f66e1f3b39a 0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c2fbf19b43d226ac3df946ba1ff9b7c36a115f66e1f3b39a 0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c2fbf19b43d226ac3df946ba1ff9b7c36a115f66e1f3b39a 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CX9 00:33:58.799 00:46:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CX9 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CX9 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44db64185cd8b0b44f8117e599d98901de4eced5b0939e60 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Scb 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44db64185cd8b0b44f8117e599d98901de4eced5b0939e60 2 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44db64185cd8b0b44f8117e599d98901de4eced5b0939e60 2 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44db64185cd8b0b44f8117e599d98901de4eced5b0939e60 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:58.799 00:46:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Scb 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Scb 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Scb 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e0eb6a7c854d73fc1ad6b7515d8ee8f6 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.V9d 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e0eb6a7c854d73fc1ad6b7515d8ee8f6 1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e0eb6a7c854d73fc1ad6b7515d8ee8f6 1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e0eb6a7c854d73fc1ad6b7515d8ee8f6 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.V9d 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.V9d 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.V9d 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=36d2d400cefd755576760a6ab8b6fc96 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wgO 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 36d2d400cefd755576760a6ab8b6fc96 1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 36d2d400cefd755576760a6ab8b6fc96 1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=36d2d400cefd755576760a6ab8b6fc96 00:33:58.799 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:33:58.799 
00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wgO 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wgO 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wgO 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=67a3a0f73a9f52268f3fb99391263df6bb937170e5e800f4 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.1wV 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 67a3a0f73a9f52268f3fb99391263df6bb937170e5e800f4 2 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 67a3a0f73a9f52268f3fb99391263df6bb937170e5e800f4 2 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=67a3a0f73a9f52268f3fb99391263df6bb937170e5e800f4 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.1wV 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.1wV 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1wV 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4b6c7e9b329087437a7a2618a15c882c 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.55p 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4b6c7e9b329087437a7a2618a15c882c 0 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4b6c7e9b329087437a7a2618a15c882c 0 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:59.059 00:46:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4b6c7e9b329087437a7a2618a15c882c 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.55p 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.55p 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.55p 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bca6f7ba3acae80ed88c0365bbcc94095b4fef8debce3505e16edbac60627d77 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8VW 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bca6f7ba3acae80ed88c0365bbcc94095b4fef8debce3505e16edbac60627d77 3 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bca6f7ba3acae80ed88c0365bbcc94095b4fef8debce3505e16edbac60627d77 3 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bca6f7ba3acae80ed88c0365bbcc94095b4fef8debce3505e16edbac60627d77 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8VW 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8VW 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.8VW 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1067620 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 1067620 ']' 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
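The key-generation trace above can be condensed into a short sketch: random hex material from /dev/urandom via xxd, a 0600-permission temp file, and the DHHC-1 wrapping that the `python -` step performs. The base64-of-key-plus-little-endian-CRC-32 layout is an assumption taken from the NVMe DH-HMAC-CHAP printable key format; the trace itself only shows the inputs to the python step, not its body.

```shell
# Sketch of gen_dhchap_key / format_dhchap_key as traced above.
# The CRC-32 trailer is an assumption from the NVMe DH-HMAC-CHAP
# key format, not visible in the xtrace output itself.

gen_key_material() {
    # $1 = number of random bytes (half the script's "len" of hex chars)
    local file key
    key=$(xxd -p -c0 -l "$1" /dev/urandom)   # e.g. 16 bytes -> 32 hex chars
    file=$(mktemp -t spdk.key.XXX)
    printf '%s\n' "$key" > "$file"
    chmod 0600 "$file"                       # keys must not be world-readable
    echo "$file"
}

format_dhchap_key() {
    # $1 = hex key, $2 = digest id (0=null, 1=sha256, 2=sha384, 3=sha512)
    python3 - "$1" "$2" <<'EOF'
import base64, binascii, sys, zlib
raw = binascii.unhexlify(sys.argv[1])
crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed CRC-32 trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
EOF
}
```

With the 16-byte key from the trace, `format_dhchap_key 36d2d400cefd755576760a6ab8b6fc96 1` produces a `DHHC-1:01:...:` string, matching the shape of the keys the test later feeds to `keyring_file_add_key`.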
00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:59.059 00:46:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V8Q 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.nt0 ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.nt0 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CX9 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.Scb ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Scb 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.318 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.V9d 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wgO ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wgO 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1wV 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.578 
00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.55p ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.55p 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.8VW 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:59.578 00:46:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.512 Waiting for block devices as requested 00:34:00.512 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:34:00.512 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:34:00.770 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:34:00.770 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:34:00.770 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:34:00.770 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:34:01.028 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:34:01.028 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:34:01.028 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:34:01.028 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:34:01.286 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:34:01.286 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:34:01.286 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:34:01.286 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:34:01.544 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:34:01.544 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:34:01.544 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1661 -- # [[ none != none ]] 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:02.110 No valid GPT data, bailing 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:34:02.110 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:34:02.111 00:34:02.111 Discovery Log Number of Records 2, Generation counter 2 00:34:02.111 =====Discovery Log Entry 0====== 00:34:02.111 trtype: tcp 00:34:02.111 adrfam: ipv4 00:34:02.111 subtype: current discovery subsystem 00:34:02.111 treq: not specified, sq flow control disable supported 00:34:02.111 portid: 1 00:34:02.111 trsvcid: 4420 00:34:02.111 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:02.111 traddr: 10.0.0.1 00:34:02.111 eflags: none 00:34:02.111 sectype: none 00:34:02.111 =====Discovery Log Entry 1====== 00:34:02.111 trtype: tcp 00:34:02.111 adrfam: ipv4 00:34:02.111 subtype: nvme subsystem 00:34:02.111 treq: not specified, sq flow control disable supported 00:34:02.111 portid: 1 00:34:02.111 trsvcid: 4420 00:34:02.111 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:02.111 traddr: 10.0.0.1 00:34:02.111 eflags: none 00:34:02.111 sectype: none 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.111 00:46:29 
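The `configure_kernel_target` steps traced above reduce to a handful of configfs writes. This is a non-runnable sketch (it needs root and the `nvmet` module): the xtrace hides redirection targets, so the attribute names below (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are inferred from the standard kernel nvmet configfs layout, matched to the order of the `echo` calls in the log.

```shell
# Inferred targets for the echo/mkdir/ln calls above; needs root + nvmet.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"      # inferred target
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                    # expose subsystem on the port
```

The subsequent `nvme discover` output in the log (two records: the discovery subsystem and `nqn.2024-02.io.spdk:cnode0`, both on 10.0.0.1:4420/tcp) is exactly what this layout should advertise.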
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.111 00:46:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.369 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:02.369 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:02.369 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.370 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.629 nvme0n1 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.629 nvme0n1 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.629 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:02.888 00:46:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.888 nvme0n1 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:02.888 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.148 00:46:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 nvme0n1 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.148 00:46:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.407 nvme0n1 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.407 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.665 nvme0n1 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.665 00:46:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.665 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.232 00:46:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.232 nvme0n1 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.232 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:04.491 00:46:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.491 nvme0n1 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.491 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:04.749 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:04.750 00:46:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.750 nvme0n1 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:04.750 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.008 00:46:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.008 nvme0n1 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.008 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.268 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.268 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.268 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.268 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.268 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.269 00:46:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.269 nvme0n1 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.269 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.526 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.091 00:46:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.091 00:46:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.658 nvme0n1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.658 
00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.658 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.917 nvme0n1 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.917 00:46:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:06.917 00:46:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.917 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.175 nvme0n1 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.175 00:46:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.433 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.691 nvme0n1 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ 
-z 10.0.0.1 ]] 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.692 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.950 nvme0n1 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.950 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.208 00:46:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.108 00:46:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.673 nvme0n1 00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.673 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:10.674 00:46:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:10.932 00:46:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.932 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:10.932 00:46:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.497 nvme0n1 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.497 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.062 nvme0n1 
00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.062 00:46:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.995 nvme0n1 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.995 00:46:40 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:12.995 00:46:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.604 nvme0n1 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 
00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:13.604 00:46:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:13.604 00:46:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.536 nvme0n1 00:34:14.536 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:14.793 00:46:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.724 nvme0n1 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.982 00:46:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.914 nvme0n1 00:34:16.914 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:16.914 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.914 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:16.914 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.914 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:17.172 00:46:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:17.172 00:46:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:17.172 00:46:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.545 nvme0n1 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.545 00:46:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.545 00:46:46 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.545 00:46:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.475 nvme0n1 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.475 
00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.475 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.733 nvme0n1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.733 
00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.733 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.991 nvme0n1 00:34:19.991 00:46:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.991 00:46:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.991 00:46:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.991 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.250 nvme0n1 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.250 
00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.250 00:46:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.250 00:46:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 nvme0n1 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.509 00:46:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.509 nvme0n1 00:34:20.509 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.510 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.510 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.510 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.510 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.510 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.768 00:46:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:20.768 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.027 nvme0n1 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:21.027 00:46:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.027 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.028 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.286 nvme0n1 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.286 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.287 00:46:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.546 nvme0n1 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.546 00:46:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.546 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.547 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.805 nvme0n1 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.805 00:46:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.805 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 nvme0n1 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.064 
00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 
00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.064 00:46:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.323 nvme0n1 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.323 00:46:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.323 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.581 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.839 nvme0n1 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:22.839 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:22.840 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 nvme0n1 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.098 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.356 00:46:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.356 00:46:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.615 nvme0n1 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.615 00:46:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:23.615 00:46:51 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.615 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.874 nvme0n1 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:23.874 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.133 00:46:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.133 00:46:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.133 00:46:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 nvme0n1 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.699 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 
00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:34:24.700 00:46:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.266 nvme0n1 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.266 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.524 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:25.525 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.090 nvme0n1 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.090 00:46:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.090 00:46:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.656 nvme0n1 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.656 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.914 00:46:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.479 nvme0n1 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 
-- # xtrace_disable 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:27.479 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:27.480 00:46:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:27.480 00:46:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.910 nvme0n1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.910 00:46:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.844 nvme0n1 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.844 00:46:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.216 nvme0n1 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.216 00:46:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.216 00:46:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.216 00:46:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.150 nvme0n1 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.150 00:46:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:32.150 
00:46:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.150 00:46:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.523 nvme0n1 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 
00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- 
# rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.523 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.524 nvme0n1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.524 00:47:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.524 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 nvme0n1 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.782 
00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.782 00:47:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.041 nvme0n1 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:34.041 
00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.041 00:47:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.041 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.299 nvme0n1 00:34:34.299 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.299 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.300 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.300 00:47:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.300 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 00:47:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.300 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.558 nvme0n1 00:34:34.558 00:47:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.558 00:47:02 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.558 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.817 nvme0n1 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.817 
00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.817 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.818 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.074 nvme0n1 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.074 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.075 00:47:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.075 00:47:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.332 nvme0n1 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.332 00:47:03 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.332 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.590 nvme0n1 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.590 00:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.590 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:35.591 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:35.591 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:35.591 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.591 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.591 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.849 nvme0n1 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:35.849 00:47:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.849 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:35.850 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.415 nvme0n1
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.415 00:47:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==:
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==:
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==:
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==:
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:36.415 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:36.416 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:36.416 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.416 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.673 nvme0n1
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A:
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1:
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A:
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1:
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.673 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.931 nvme0n1
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:36.931 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==:
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj:
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==:
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj:
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.190 00:47:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.449 nvme0n1
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=:
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=:
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.449 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.707 nvme0n1
00:34:37.707 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.707 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:37.707 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:37.707 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.707 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC:
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=:
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC:
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=:
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:37.966 00:47:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.533 nvme0n1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==:
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==:
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==:
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==:
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:38.533 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.467 nvme0n1
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.467 00:47:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A:
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1:
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A:
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]]
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1:
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.467 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:39.468 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.034 nvme0n1
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==:
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj:
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==:
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]]
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj:
00:34:40.034 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:40.035 00:47:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.602 nvme0n1
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.602 00:47:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.602 00:47:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.534 nvme0n1 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzM0ZmUyMGEyYjQ3M2EwOWQxOTAwOGI3OGQyNGRiYWEOBnsC: 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE3MjMyODQ3MjQ4NTQ0MGQzZDQ1NDQzNGE2MzlhYWY2NTExNzM4MWQyZjhiNmZiODViMTE5YTA0ODU5YjkyOVg9ftU=: 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.534 00:47:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.465 nvme0n1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.465 00:47:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.839 nvme0n1 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTBlYjZhN2M4NTRkNzNmYzFhZDZiNzUxNWQ4ZWU4ZjYC+55A: 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzZkMmQ0MDBjZWZkNzU1NTc2NzYwYTZhYjhiNmZjOTYVZUF1: 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:43.839 00:47:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.827 nvme0n1 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NjdhM2EwZjczYTlmNTIyNjhmM2ZiOTkzOTEyNjNkZjZiYjkzNzE3MGU1ZTgwMGY0Ph/xtA==: 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGI2YzdlOWIzMjkwODc0MzdhN2EyNjE4YTE1Yzg4MmM23AVj: 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.827 00:47:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.827 00:47:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.206 nvme0n1 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.206 00:47:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmNhNmY3YmEzYWNhZTgwZWQ4OGMwMzY1YmJjYzk0MDk1YjRmZWY4ZGViY2UzNTA1ZTE2ZWRiYWM2MDYyN2Q3N8fUjj8=: 00:34:46.206 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:46.207 00:47:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:46.207 00:47:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.590 nvme0n1 00:34:47.590 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzJmYmYxOWI0M2QyMjZhYzNkZjk0NmJhMWZmOWI3YzM2YTExNWY2NmUxZjNiMzlhzzwroQ==: 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # 
[[ -z DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRkYjY0MTg1Y2Q4YjBiNDRmODExN2U1OTlkOTg5MDFkZTRlY2VkNWIwOTM5ZTYwTB+MXQ==: 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@648 -- # local es=0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 request: 00:34:47.591 { 00:34:47.591 "name": "nvme0", 00:34:47.591 "trtype": "tcp", 00:34:47.591 "traddr": "10.0.0.1", 00:34:47.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.591 "adrfam": "ipv4", 00:34:47.591 "trsvcid": "4420", 00:34:47.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.591 "method": "bdev_nvme_attach_controller", 00:34:47.591 "req_id": 1 00:34:47.591 } 00:34:47.591 Got JSON-RPC error response 00:34:47.591 response: 00:34:47.591 { 00:34:47.591 "code": -5, 00:34:47.591 "message": "Input/output error" 00:34:47.591 } 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:47.591 00:47:15 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 request: 00:34:47.591 { 00:34:47.591 "name": "nvme0", 00:34:47.591 "trtype": "tcp", 00:34:47.591 "traddr": "10.0.0.1", 00:34:47.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.591 "adrfam": "ipv4", 00:34:47.591 "trsvcid": "4420", 00:34:47.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.591 "dhchap_key": "key2", 00:34:47.591 "method": "bdev_nvme_attach_controller", 00:34:47.591 "req_id": 1 00:34:47.591 } 00:34:47.591 Got JSON-RPC error response 00:34:47.591 response: 00:34:47.591 { 00:34:47.591 "code": -5, 00:34:47.591 "message": "Input/output error" 00:34:47.591 } 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 
00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.591 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.592 request: 00:34:47.592 { 00:34:47.592 "name": "nvme0", 00:34:47.592 "trtype": "tcp", 00:34:47.592 "traddr": "10.0.0.1", 00:34:47.592 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:47.592 "adrfam": "ipv4", 00:34:47.592 "trsvcid": "4420", 00:34:47.592 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:47.592 "dhchap_key": "key1", 00:34:47.592 "dhchap_ctrlr_key": "ckey2", 00:34:47.592 "method": "bdev_nvme_attach_controller", 00:34:47.592 "req_id": 1 00:34:47.592 } 00:34:47.592 Got JSON-RPC error response 00:34:47.592 response: 00:34:47.592 { 00:34:47.592 
"code": -5, 00:34:47.592 "message": "Input/output error" 00:34:47.592 } 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:47.592 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:47.851 rmmod nvme_tcp 00:34:47.851 rmmod nvme_fabrics 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1067620 ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 1067620 ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@950 -- # kill -0 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1067620' 00:34:47.851 killing process with pid 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 1067620 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:47.851 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.852 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:47.852 00:47:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.852 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.852 00:47:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:50.390 00:47:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:50.958 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:34:50.958 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:34:50.958 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 
00:34:50.958 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:34:50.958 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:34:50.958 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:34:51.218 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:34:51.218 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:34:51.218 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:34:51.218 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:34:52.157 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:34:52.157 00:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.V8Q /tmp/spdk.key-null.CX9 /tmp/spdk.key-sha256.V9d /tmp/spdk.key-sha384.1wV /tmp/spdk.key-sha512.8VW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:52.157 00:47:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:53.097 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:34:53.097 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:53.097 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:34:53.097 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:34:53.097 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:34:53.097 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:34:53.097 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:34:53.097 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:34:53.097 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:34:53.097 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:34:53.097 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:34:53.097 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:34:53.097 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:34:53.097 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:34:53.097 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:34:53.097 0000:80:04.1 
(8086 3c21): Already using the vfio-pci driver 00:34:53.097 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:34:53.097 00:34:53.097 real 0m56.449s 00:34:53.097 user 0m54.552s 00:34:53.097 sys 0m5.184s 00:34:53.097 00:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:53.097 00:47:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.097 ************************************ 00:34:53.097 END TEST nvmf_auth_host 00:34:53.097 ************************************ 00:34:53.097 00:47:20 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:34:53.097 00:47:20 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:53.097 00:47:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:53.097 00:47:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:53.097 00:47:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.097 ************************************ 00:34:53.097 START TEST nvmf_digest 00:34:53.097 ************************************ 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:53.097 * Looking for test storage... 
00:34:53.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.097 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:34:53.098 00:47:20 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:34:55.055 Found 0000:08:00.0 (0x8086 - 0x159b) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:34:55.055 Found 0000:08:00.1 (0x8086 - 0x159b) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:55.055 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:34:55.056 Found net devices under 0000:08:00.0: cvl_0_0 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:34:55.056 Found net devices under 0000:08:00.1: cvl_0_1 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:55.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:55.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:34:55.056 00:34:55.056 --- 10.0.0.2 ping statistics --- 00:34:55.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.056 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:55.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:55.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:34:55.056 00:34:55.056 --- 10.0.0.1 ping statistics --- 00:34:55.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:55.056 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:55.056 ************************************ 00:34:55.056 START TEST nvmf_digest_clean 00:34:55.056 ************************************ 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:34:55.056 00:47:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1075787 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1075787 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1075787 ']' 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.056 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.056 [2024-07-12 00:47:22.682648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:55.056 [2024-07-12 00:47:22.682744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.056 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.056 [2024-07-12 00:47:22.751882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.056 [2024-07-12 00:47:22.839412] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.056 [2024-07-12 00:47:22.839471] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.056 [2024-07-12 00:47:22.839486] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.056 [2024-07-12 00:47:22.839499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.056 [2024-07-12 00:47:22.839510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:55.056 [2024-07-12 00:47:22.839541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:55.313 00:47:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.313 null0 00:34:55.313 [2024-07-12 00:47:23.041994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.313 [2024-07-12 00:47:23.066180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:55.313 
00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1075860 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1075860 /var/tmp/bperf.sock 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1075860 ']' 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:55.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:55.313 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:55.313 [2024-07-12 00:47:23.114938] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:55.313 [2024-07-12 00:47:23.115023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075860 ] 00:34:55.313 EAL: No free 2048 kB hugepages reported on node 1 00:34:55.568 [2024-07-12 00:47:23.175092] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.568 [2024-07-12 00:47:23.262491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:55.568 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.568 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:55.568 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:55.568 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:55.569 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.132 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.132 00:47:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.391 nvme0n1 00:34:56.391 00:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:56.391 00:47:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:34:56.391 Running I/O for 2 seconds... 00:34:58.929 00:34:58.929 Latency(us) 00:34:58.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.929 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:58.929 nvme0n1 : 2.01 17605.51 68.77 0.00 0.00 7261.01 4102.07 17185.00 00:34:58.929 =================================================================================================================== 00:34:58.929 Total : 17605.51 68.77 0.00 0.00 7261.01 4102.07 17185.00 00:34:58.929 0 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:58.929 | select(.opcode=="crc32c") 00:34:58.929 | "\(.module_name) \(.executed)"' 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1075860 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1075860 ']' 00:34:58.929 00:47:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1075860 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1075860 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1075860' 00:34:58.929 killing process with pid 1075860 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1075860 00:34:58.929 Received shutdown signal, test time was about 2.000000 seconds 00:34:58.929 00:34:58.929 Latency(us) 00:34:58.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.929 =================================================================================================================== 00:34:58.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1075860 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:58.929 00:47:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1076209 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1076209 /var/tmp/bperf.sock 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1076209 ']' 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:58.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:58.929 00:47:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.188 [2024-07-12 00:47:26.785054] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:34:59.188 [2024-07-12 00:47:26.785156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076209 ] 00:34:59.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:59.188 Zero copy mechanism will not be used. 00:34:59.188 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.188 [2024-07-12 00:47:26.846731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.188 [2024-07-12 00:47:26.934299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.447 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:59.447 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:34:59.447 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:59.447 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:59.447 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:59.706 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.706 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.964 nvme0n1 00:34:59.964 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:59.964 00:47:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.224 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:00.224 Zero copy mechanism will not be used. 00:35:00.224 Running I/O for 2 seconds... 00:35:02.134 00:35:02.134 Latency(us) 00:35:02.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.134 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:02.134 nvme0n1 : 2.00 6003.57 750.45 0.00 0.00 2660.44 819.20 11699.39 00:35:02.134 =================================================================================================================== 00:35:02.134 Total : 6003.57 750.45 0.00 0.00 2660.44 819.20 11699.39 00:35:02.134 0 00:35:02.134 00:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:02.134 00:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:02.134 00:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:02.134 00:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:02.134 | select(.opcode=="crc32c") 00:35:02.134 | "\(.module_name) \(.executed)"' 00:35:02.134 00:47:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:02.433 00:47:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1076209 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1076209 ']' 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1076209 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1076209 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1076209' 00:35:02.433 killing process with pid 1076209 00:35:02.433 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1076209 00:35:02.694 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.694 00:35:02.694 Latency(us) 00:35:02.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.694 =================================================================================================================== 00:35:02.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1076209 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:02.694 00:47:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1076528 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1076528 /var/tmp/bperf.sock 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1076528 ']' 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.694 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.694 [2024-07-12 00:47:30.461516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:02.694 [2024-07-12 00:47:30.461623] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076528 ] 00:35:02.694 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.694 [2024-07-12 00:47:30.521974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.953 [2024-07-12 00:47:30.612703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.953 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:02.953 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:35:02.953 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:02.953 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:02.953 00:47:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:03.520 00:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.520 00:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.778 nvme0n1 00:35:03.778 00:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:03.779 00:47:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:35:04.037 Running I/O for 2 seconds... 00:35:05.943 00:35:05.943 Latency(us) 00:35:05.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.943 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:05.943 nvme0n1 : 2.01 17607.77 68.78 0.00 0.00 7251.11 3252.53 17767.54 00:35:05.943 =================================================================================================================== 00:35:05.943 Total : 17607.77 68.78 0.00 0.00 7251.11 3252.53 17767.54 00:35:05.943 0 00:35:05.943 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:05.943 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:05.943 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:05.943 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:05.943 | select(.opcode=="crc32c") 00:35:05.943 | "\(.module_name) \(.executed)"' 00:35:05.943 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:06.203 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:06.203 00:47:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1076528 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1076528 ']' 00:35:06.203 00:47:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1076528 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1076528 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1076528' 00:35:06.203 killing process with pid 1076528 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1076528 00:35:06.203 Received shutdown signal, test time was about 2.000000 seconds 00:35:06.203 00:35:06.203 Latency(us) 00:35:06.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:06.203 =================================================================================================================== 00:35:06.203 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:06.203 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1076528 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:06.461 00:47:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1076905 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1076905 /var/tmp/bperf.sock 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 1076905 ']' 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:06.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:06.461 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:06.461 [2024-07-12 00:47:34.236348] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:06.461 [2024-07-12 00:47:34.236451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076905 ] 00:35:06.461 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:06.461 Zero copy mechanism will not be used. 00:35:06.461 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.461 [2024-07-12 00:47:34.297154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:06.719 [2024-07-12 00:47:34.384206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.719 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:06.719 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:35:06.719 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:06.719 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:06.719 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:07.287 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.287 00:47:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:07.857 nvme0n1 00:35:07.857 00:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:07.857 00:47:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.857 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:07.857 Zero copy mechanism will not be used. 00:35:07.857 Running I/O for 2 seconds... 00:35:09.765 00:35:09.765 Latency(us) 00:35:09.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.765 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:09.765 nvme0n1 : 2.00 5786.68 723.33 0.00 0.00 2757.71 2099.58 11699.39 00:35:09.765 =================================================================================================================== 00:35:09.765 Total : 5786.68 723.33 0.00 0.00 2757.71 2099.58 11699.39 00:35:09.765 0 00:35:09.765 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:09.765 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:09.765 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:09.765 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:09.765 | select(.opcode=="crc32c") 00:35:09.765 | "\(.module_name) \(.executed)"' 00:35:09.765 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:10.024 00:47:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1076905 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1076905 ']' 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1076905 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:10.024 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1076905 00:35:10.285 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:10.285 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:10.285 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1076905' 00:35:10.285 killing process with pid 1076905 00:35:10.285 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1076905 00:35:10.285 Received shutdown signal, test time was about 2.000000 seconds 00:35:10.285 00:35:10.285 Latency(us) 00:35:10.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:10.285 =================================================================================================================== 00:35:10.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:10.285 00:47:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1076905 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1075787 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 1075787 ']' 00:35:10.285 00:47:38 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 1075787 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1075787 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1075787' 00:35:10.285 killing process with pid 1075787 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 1075787 00:35:10.285 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 1075787 00:35:10.544 00:35:10.544 real 0m15.614s 00:35:10.544 user 0m31.813s 00:35:10.544 sys 0m4.113s 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:10.544 ************************************ 00:35:10.544 END TEST nvmf_digest_clean 00:35:10.544 ************************************ 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.544 
************************************ 00:35:10.544 START TEST nvmf_digest_error 00:35:10.544 ************************************ 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1077258 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1077258 00:35:10.544 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1077258 ']' 00:35:10.545 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.545 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:10.545 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:10.545 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:10.545 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.545 [2024-07-12 00:47:38.353736] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:10.545 [2024-07-12 00:47:38.353828] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:10.803 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.803 [2024-07-12 00:47:38.417491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.803 [2024-07-12 00:47:38.503447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:10.803 [2024-07-12 00:47:38.503511] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:10.803 [2024-07-12 00:47:38.503528] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:10.803 [2024-07-12 00:47:38.503541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:10.803 [2024-07-12 00:47:38.503552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:10.803 [2024-07-12 00:47:38.503595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:10.803 [2024-07-12 00:47:38.628344] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.803 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.061 null0 00:35:11.061 [2024-07-12 00:47:38.731092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:11.061 
[2024-07-12 00:47:38.755296] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1077376 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1077376 /var/tmp/bperf.sock 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1077376 ']' 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:11.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:11.061 00:47:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.061 [2024-07-12 00:47:38.805012] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:11.061 [2024-07-12 00:47:38.805105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077376 ] 00:35:11.061 EAL: No free 2048 kB hugepages reported on node 1 00:35:11.061 [2024-07-12 00:47:38.865220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:11.319 [2024-07-12 00:47:38.952913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.319 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:11.319 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:11.319 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.319 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:11.577 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:11.577 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.577 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:11.577 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.577 00:47:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:11.577 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.146 nvme0n1 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:12.146 00:47:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:12.146 Running I/O for 2 seconds... 
00:35:12.146 [2024-07-12 00:47:39.855989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.856049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.856070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.872776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.872812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.872832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.886409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.886444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.886473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.901931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.901964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.901983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.917566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.917606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.917626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.932583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.932623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.932643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.947674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.947706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.947724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.962974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.963007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.963027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.146 [2024-07-12 00:47:39.976851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.146 [2024-07-12 00:47:39.976888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.146 [2024-07-12 00:47:39.976908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.407 [2024-07-12 00:47:39.994404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.407 [2024-07-12 00:47:39.994438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.407 [2024-07-12 00:47:39.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.010541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.010579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.010611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.023796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.023837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:12.408 [2024-07-12 00:47:40.023858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.039597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.039657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.039682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.054196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.054230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.054249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.069934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.069967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.069986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.086529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.086562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:5101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.086581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.100015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.100048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.100067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.116250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.116286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.116304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.129886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.129920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.129939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.145160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.145192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.145211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.160978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.161010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.161029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.175444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.175485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.175503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.190045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.190077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.190096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.203649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.203681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.203700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.217896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.217928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.217947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.408 [2024-07-12 00:47:40.232179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.408 [2024-07-12 00:47:40.232211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.408 [2024-07-12 00:47:40.232229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.246765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.246799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.246818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.261138] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.261170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.261188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.275572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.275615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.275641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.290010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.290045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.290064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.304416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.304452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.304470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.319111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.319152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.319171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.333578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.333618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.333637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.348022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.348055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.348073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.362820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.362855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.379836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.379875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.379893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.393193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.393226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.393245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.408642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.408679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.408699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.421947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.421979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.421998] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.437654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.437688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.437706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.669 [2024-07-12 00:47:40.450714] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.669 [2024-07-12 00:47:40.450748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.669 [2024-07-12 00:47:40.450766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.670 [2024-07-12 00:47:40.465139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.670 [2024-07-12 00:47:40.465172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.670 [2024-07-12 00:47:40.465191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.670 [2024-07-12 00:47:40.480528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.670 [2024-07-12 00:47:40.480561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9849 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:12.670 [2024-07-12 00:47:40.480580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.670 [2024-07-12 00:47:40.495917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.670 [2024-07-12 00:47:40.495950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.670 [2024-07-12 00:47:40.495970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.508219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.508254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.508272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.523743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.523777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.523795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.542677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.542711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:85 nsid:1 lba:21372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.542729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.556763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.556796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.556815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.569108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.569140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.569158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.587095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.587129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.587148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.601657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.601690] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.601708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.616066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.616098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.616117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.628606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.628638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.628657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.643698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.643730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.643749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.659212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.659246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.659278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.674195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.674226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.674245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.687963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.687995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.703259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.703291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.703310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.717974] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.718007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.718025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.732059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.732092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.732110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.748630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.748662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.748681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:12.928 [2024-07-12 00:47:40.761716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:12.928 [2024-07-12 00:47:40.761749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:12.928 [2024-07-12 00:47:40.761767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.777196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.777230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.777248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.794339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.794373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.794391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.807315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.807347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.807365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.822777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.822812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.822831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.838330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.838366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.838385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.850910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.850943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.850961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.867823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.867855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.867874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.884809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.884841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.884860] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.897698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.897729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.897748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.916879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.916913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.916941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.933947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.933980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.933999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.947263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.947296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17484 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.962390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.962423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.962441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.976818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.976850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.976869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:40.991557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:40.991595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:40.991615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:41.006316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:41.006349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:70 nsid:1 lba:19079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:41.006367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.188 [2024-07-12 00:47:41.025123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.188 [2024-07-12 00:47:41.025156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.188 [2024-07-12 00:47:41.025175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.037729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.448 [2024-07-12 00:47:41.037763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.448 [2024-07-12 00:47:41.037781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.056501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.448 [2024-07-12 00:47:41.056543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.448 [2024-07-12 00:47:41.056563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.072444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.448 [2024-07-12 00:47:41.072477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.448 [2024-07-12 00:47:41.072495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.085532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.448 [2024-07-12 00:47:41.085564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.448 [2024-07-12 00:47:41.085582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.103434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.448 [2024-07-12 00:47:41.103468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.448 [2024-07-12 00:47:41.103486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.448 [2024-07-12 00:47:41.118787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.118819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.118838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.131256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.131288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.131306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.147745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.147778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.147796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.162297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.162330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.162349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.176557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.176597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.176616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.190501] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.190533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.190551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.203783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.203815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.203834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.221866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.221898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.221917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.238283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.238316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.238334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.250938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.250969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.250987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.266094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.266126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.449 [2024-07-12 00:47:41.282784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.449 [2024-07-12 00:47:41.282817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.449 [2024-07-12 00:47:41.282836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.296647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.296679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.296699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.311321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.311353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.311380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.326042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.326074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.326093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.339480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.339512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.339530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.358085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.358118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.358137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.375491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.375524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.375542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.389168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.389201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.389220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.404835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.404868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.404886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.419429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.419463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.419482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.434006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.434056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.448653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.448694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.448712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.464545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.464576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.464603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.478335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.478368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.478386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.492368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.492400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.492419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.506520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.506553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.506571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.521706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.521738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.521756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.710 [2024-07-12 00:47:41.536133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.710 [2024-07-12 00:47:41.536165] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.710 [2024-07-12 00:47:41.536183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.551443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.551475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.551494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.565961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.565992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.566010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.581683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.581715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.581734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.594913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.594945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.594963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.612130] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.612166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.612185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.625454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.625489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.625507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.640502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.640533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.640552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.654897] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.654929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.971 [2024-07-12 00:47:41.654948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.971 [2024-07-12 00:47:41.669469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.971 [2024-07-12 00:47:41.669505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.669524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.683520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.683556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.683574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.700131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.700165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.700188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.714654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.714686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.714704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.727426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.727461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.727480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.745907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.745943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.745962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.761769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.761801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.761819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.775416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.775448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.775466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.789419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.789451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.789470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.972 [2024-07-12 00:47:41.803818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:13.972 [2024-07-12 00:47:41.803850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.972 [2024-07-12 00:47:41.803868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.229 [2024-07-12 00:47:41.820379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:14.229 [2024-07-12 00:47:41.820419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.229 [2024-07-12 00:47:41.820438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.229 [2024-07-12 00:47:41.834068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x875590) 00:35:14.229 [2024-07-12 00:47:41.834100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.229 [2024-07-12 00:47:41.834119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.229 00:35:14.229 Latency(us) 00:35:14.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.229 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:14.229 nvme0n1 : 2.05 16691.18 65.20 0.00 0.00 7509.31 4053.52 44273.21 00:35:14.229 =================================================================================================================== 00:35:14.229 Total : 16691.18 65.20 0.00 0.00 7509.31 4053.52 44273.21 00:35:14.229 0 00:35:14.229 00:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.229 00:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.229 00:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.229 00:47:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.229 | .driver_specific 00:35:14.229 | .nvme_error 00:35:14.229 | .status_code 00:35:14.229 | .command_transient_transport_error' 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 133 > 0 )) 00:35:14.488 00:47:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1077376 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1077376 ']' 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1077376 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077376 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077376' 00:35:14.488 killing process with pid 1077376 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1077376 00:35:14.488 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.488 00:35:14.488 Latency(us) 00:35:14.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.488 =================================================================================================================== 00:35:14.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.488 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1077376 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@56 -- # rw=randread 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1077688 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1077688 /var/tmp/bperf.sock 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1077688 ']' 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:14.747 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.747 [2024-07-12 00:47:42.440201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:14.747 [2024-07-12 00:47:42.440310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077688 ] 00:35:14.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.747 Zero copy mechanism will not be used. 00:35:14.747 EAL: No free 2048 kB hugepages reported on node 1 00:35:14.747 [2024-07-12 00:47:42.500970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.006 [2024-07-12 00:47:42.588459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.006 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:15.006 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:15.006 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.006 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.263 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:15.263 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.263 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.263 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.263 00:47:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.263 00:47:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.521 nvme0n1 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.521 00:47:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.780 Zero copy mechanism will not be used. 00:35:15.780 Running I/O for 2 seconds... 
00:35:15.780 [2024-07-12 00:47:43.413006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.780 [2024-07-12 00:47:43.413069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.780 [2024-07-12 00:47:43.413091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.780 [2024-07-12 00:47:43.419805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.780 [2024-07-12 00:47:43.419841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.780 [2024-07-12 00:47:43.419861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.780 [2024-07-12 00:47:43.426451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.780 [2024-07-12 00:47:43.426487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.780 [2024-07-12 00:47:43.426506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.780 [2024-07-12 00:47:43.433065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.780 [2024-07-12 00:47:43.433100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.780 [2024-07-12 00:47:43.433119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.780 [2024-07-12 00:47:43.439649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.780 [2024-07-12 00:47:43.439683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.780 [2024-07-12 00:47:43.439702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.780 [2024-07-12 00:47:43.446306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.446341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.446360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.452916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.452951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.452970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.459501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.459535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.459554] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.466195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.466230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.466256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.472900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.472934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.472953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.480345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.480492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.486442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.486476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.486496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.494057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.494092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.494111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.501802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.501837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.501857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.509307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.509342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.509360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.516287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.516321] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.516340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.524094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.524129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.524148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.531900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.531935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.531955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.539213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.539248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.539267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.546487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 
00:47:43.546522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.546540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.554376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.554411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.554430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.562179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.562214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.562233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.568901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.568936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.568955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.576686] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.576722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.576742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.585077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.585112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.585131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.593447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.593502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.593531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.599357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.599391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.599410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.606014] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.606049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.781 [2024-07-12 00:47:43.613349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:15.781 [2024-07-12 00:47:43.613384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.781 [2024-07-12 00:47:43.613403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.621574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.621618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.621638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.629423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.629457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.629476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.636098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.636132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.636151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.643011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.643046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.643064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.649747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.649783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.649803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.657140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.657179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.657199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.664848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.664882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.664902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.672754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.672791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.672811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.680348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.680401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.680486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.687669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.687704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 
00:47:43.687724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.695715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.695751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.041 [2024-07-12 00:47:43.695769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.041 [2024-07-12 00:47:43.702698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.041 [2024-07-12 00:47:43.702734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.702753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.709431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.709486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.716514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.716548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.716568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.724460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.724495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.724514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.731955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.731991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.732010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.740039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.740087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.740108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.746946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.746988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.747008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.753689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.753725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.753744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.760318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.760354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.760373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.766983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.767017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.767035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.774709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 
00:35:16.042 [2024-07-12 00:47:43.774744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.774764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.781874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.781910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.781935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.789656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.789692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.789711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.796505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.796560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.804283] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.804330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.804349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.812087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.812123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.812143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.818879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.818914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.818933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.826758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.826795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.826814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.834758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.834793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.834812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.842572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.842617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.842636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.850493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.850537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.850556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.858482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.858531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.858551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.866479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.866514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.866533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.042 [2024-07-12 00:47:43.874297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.042 [2024-07-12 00:47:43.874342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.042 [2024-07-12 00:47:43.874362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.882127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.882162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.882182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.889925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.889961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.889980] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.896921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.896956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.896976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.904503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.904545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.904564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.912845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.912882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.912901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.921633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.921668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.921688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.929205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.929242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.929262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.936682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.936717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.936735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.943336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.943371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.943390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.949932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.949968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.949987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.301 [2024-07-12 00:47:43.956667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.301 [2024-07-12 00:47:43.956704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.301 [2024-07-12 00:47:43.956723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.963269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.963304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.963323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.969956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.969994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.970014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.976614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.976654] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.976674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.983322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.983357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.983377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.990098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.990143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.990163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:43.997029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:43.997063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:43.997083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.004267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.004312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.004331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.012079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.012115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.012134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.020515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.020558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.020577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.029223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.029257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.029277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.037146] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.037182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.037201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.045096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.045131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.045150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.053043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.053079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.053098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.061234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.061269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.061296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.068509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.068543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.068562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.075264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.075298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.075318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.082031] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.082074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.082093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.088855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.088891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.088910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.096267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.096302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.096321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.103978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.104013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.104038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.110999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.111035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.111055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.119037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.302 [2024-07-12 00:47:44.119080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.302 [2024-07-12 00:47:44.119099] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.302 [2024-07-12 00:47:44.126201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.303 [2024-07-12 00:47:44.126236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.303 [2024-07-12 00:47:44.126255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.303 [2024-07-12 00:47:44.134319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.303 [2024-07-12 00:47:44.134354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.303 [2024-07-12 00:47:44.134385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.139753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.139847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.139867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.146478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.146513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.146532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.153093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.153150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.161086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.161122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.161141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.168419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.168461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.168481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.175833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.175868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.175887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.183646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.183683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.183702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.190868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.190902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.190921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.198649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.198684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.198703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.205871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.205909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.205928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.213198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.213240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.213260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.219812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.219847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.219866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.226652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.226686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.226708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.561 [2024-07-12 00:47:44.233445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0f70) 00:35:16.561 [2024-07-12 00:47:44.233481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.561 [2024-07-12 00:47:44.233500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:16.562 [2024-07-12 00:47:44.240215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.562 [2024-07-12 00:47:44.240250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.562 [2024-07-12 00:47:44.240270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:16.562 [2024-07-12 00:47:44.247027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.562 [2024-07-12 00:47:44.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.562 [2024-07-12 00:47:44.247087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:16.562 [2024-07-12 00:47:44.253765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:16.562 [2024-07-12 00:47:44.253801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.562 [2024-07-12 00:47:44.253821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:16.562 [2024-07-12 00:47:44.261157] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70)
00:35:16.562 [2024-07-12 00:47:44.261199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:16.562 [2024-07-12 00:47:44.261221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[dozens of further records of this same pattern elided: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x12e0f70), each followed by a READ command print (sqid:1, nsid:1, len:32, varying cid and lba) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, timestamps 2024-07-12 00:47:44.268 through 00:47:44.851]
00:35:17.122 [2024-07-12 00:47:44.858722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70)
00:35:17.122 [2024-07-12 00:47:44.858758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.858777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.866160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.866201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.866221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.873361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.873397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.873416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.880120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.880155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.880174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.886920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.886956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.886975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.894647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.894695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.894714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.901505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.901538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.901557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.908841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.908875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.908894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.915636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.915669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.915688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.922510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.922545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.922565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.929144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.929178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.929196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.935841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.935875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.935894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.943350] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.943386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.943405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.122 [2024-07-12 00:47:44.951174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.122 [2024-07-12 00:47:44.951208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.122 [2024-07-12 00:47:44.951226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.381 [2024-07-12 00:47:44.959185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.381 [2024-07-12 00:47:44.959220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.381 [2024-07-12 00:47:44.959240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.381 [2024-07-12 00:47:44.966442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.381 [2024-07-12 00:47:44.966478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.381 [2024-07-12 00:47:44.966497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:35:17.381 [2024-07-12 00:47:44.974397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.381 [2024-07-12 00:47:44.974431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:44.974450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:44.982179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:44.982213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:44.982232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:44.989514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:44.989550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:44.989576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:44.996837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:44.996873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:44.996892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.004728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.004764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.004794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.012594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.012628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.012647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.020550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.020614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.027964] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.028000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.028019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.035453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.035488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.035507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.043263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.043298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.043318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.051114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.051149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.051167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.058909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.058945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.058964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.066224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.066258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.066277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.073612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.073647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.073666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.081459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.081493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.081512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.089330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.089367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.089386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.097162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.097196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.097215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.104432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.104466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.104484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.111983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.112018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.112037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.119764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.119799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.119824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.126850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.126885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.126904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.134898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.134933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.134952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.142308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.142350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.142369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.149400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.149434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.149453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.156384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.156428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.156448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.163699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.163734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.163753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.170738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.170773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.170793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.175668] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.175716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.175738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.181625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.181675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.181694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.382 [2024-07-12 00:47:45.188521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.382 [2024-07-12 00:47:45.188570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.382 [2024-07-12 00:47:45.188597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.383 [2024-07-12 00:47:45.195841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.383 [2024-07-12 00:47:45.195877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.383 [2024-07-12 00:47:45.195897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:17.383 [2024-07-12 00:47:45.203078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.383 [2024-07-12 00:47:45.203113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.383 [2024-07-12 00:47:45.203132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.383 [2024-07-12 00:47:45.211666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.383 [2024-07-12 00:47:45.211702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.383 [2024-07-12 00:47:45.211721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.220178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.220215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.220233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.227720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.227755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.227774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.234749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.234785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.234805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.241490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.241526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.241544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.248332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.248368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.248387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.255161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.255197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.255217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.261983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.262019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.262037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.641 [2024-07-12 00:47:45.268424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.641 [2024-07-12 00:47:45.268511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.641 [2024-07-12 00:47:45.268541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.272811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.272847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.272866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.279649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.279685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.279708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.287529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.287563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.287582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.295465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.295500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.295519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.302553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.302596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.302631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.309922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.309957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.309976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.317680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.317714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.317734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.325701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.325743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.325762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.333845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.333880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.333900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.342331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.342367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.342386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.350750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.350785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.350804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.358696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.358733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.358753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.366915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.366952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.366971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.375438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 
00:35:17.642 [2024-07-12 00:47:45.375485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.375505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.384247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.384292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.384311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.391830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.391864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.391883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.399047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.399082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.399101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.642 [2024-07-12 00:47:45.407210] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12e0f70) 00:35:17.642 [2024-07-12 00:47:45.407246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.642 [2024-07-12 00:47:45.407265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:17.642 00:35:17.642 Latency(us) 00:35:17.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.642 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:17.642 nvme0n1 : 2.00 4220.77 527.60 0.00 0.00 3785.93 958.77 9175.04 00:35:17.642 =================================================================================================================== 00:35:17.642 Total : 4220.77 527.60 0.00 0.00 3785.93 958.77 9175.04 00:35:17.642 0 00:35:17.642 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.642 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.642 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:17.642 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.642 | .driver_specific 00:35:17.642 | .nvme_error 00:35:17.642 | .status_code 00:35:17.642 | .command_transient_transport_error' 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 )) 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1077688 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1077688 ']' 00:35:17.900 
00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1077688 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:17.900 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077688 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077688' 00:35:18.159 killing process with pid 1077688 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1077688 00:35:18.159 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.159 00:35:18.159 Latency(us) 00:35:18.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.159 =================================================================================================================== 00:35:18.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1077688 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 
-- # qd=128 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1077998 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1077998 /var/tmp/bperf.sock 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1077998 ']' 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:18.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:18.159 00:47:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.159 [2024-07-12 00:47:45.963901] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:18.159 [2024-07-12 00:47:45.964005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077998 ] 00:35:18.159 EAL: No free 2048 kB hugepages reported on node 1 00:35:18.417 [2024-07-12 00:47:46.024551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.417 [2024-07-12 00:47:46.115387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:18.417 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:18.417 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:18.417 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.417 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:18.675 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:18.675 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.675 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.933 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.933 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:18.933 00:47:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.499 nvme0n1 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:19.499 00:47:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.499 Running I/O for 2 seconds... 00:35:19.499 [2024-07-12 00:47:47.216160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190df988 00:35:19.500 [2024-07-12 00:47:47.216996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.217038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.230106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190df988 00:35:19.500 [2024-07-12 00:47:47.230952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.230985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 
m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.242720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190de8a8 00:35:19.500 [2024-07-12 00:47:47.243536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.243567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.260066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e01f8 00:35:19.500 [2024-07-12 00:47:47.261660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.261692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.272652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e01f8 00:35:19.500 [2024-07-12 00:47:47.274234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.274264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.287027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190df118 00:35:19.500 [2024-07-12 00:47:47.288800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.288831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.301384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e5220 00:35:19.500 [2024-07-12 00:47:47.303331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.303362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.315732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f8618 00:35:19.500 [2024-07-12 00:47:47.317868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.317899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:19.500 [2024-07-12 00:47:47.330029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f2510 00:35:19.500 [2024-07-12 00:47:47.332357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.500 [2024-07-12 00:47:47.332387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.339768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e6300 00:35:19.760 [2024-07-12 00:47:47.340778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.340809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.354082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f1ca0 00:35:19.760 [2024-07-12 00:47:47.355280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.355310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.368500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e7818 00:35:19.760 [2024-07-12 00:47:47.369940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.369971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.381722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f81e0 00:35:19.760 [2024-07-12 00:47:47.383142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.383173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.396054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f0350 00:35:19.760 [2024-07-12 00:47:47.397669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 
[2024-07-12 00:47:47.397699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.410347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190de8a8 00:35:19.760 [2024-07-12 00:47:47.412153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.412184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.424655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ea248 00:35:19.760 [2024-07-12 00:47:47.426657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.426688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.439038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190fcdd0 00:35:19.760 [2024-07-12 00:47:47.441270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.760 [2024-07-12 00:47:47.441302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:19.760 [2024-07-12 00:47:47.448821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190fef90 00:35:19.761 [2024-07-12 00:47:47.449730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:492 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.449760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.463260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190eb328 00:35:19.761 [2024-07-12 00:47:47.464348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.464379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.477791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e23b8 00:35:19.761 [2024-07-12 00:47:47.479072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.479105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.491619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f92c0 00:35:19.761 [2024-07-12 00:47:47.492870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.492901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.505204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e1b48 00:35:19.761 [2024-07-12 00:47:47.506446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:7073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.506477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.517945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e4de8 00:35:19.761 [2024-07-12 00:47:47.519202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.519240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.532307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190fda78 00:35:19.761 [2024-07-12 00:47:47.533732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.533763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.546691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f0ff8 00:35:19.761 [2024-07-12 00:47:47.548319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.548350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.561116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190fa3a0 00:35:19.761 [2024-07-12 00:47:47.562930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.562960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.575500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190eea00 00:35:19.761 [2024-07-12 00:47:47.577494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.577527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:19.761 [2024-07-12 00:47:47.589879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190f96f8 00:35:19.761 [2024-07-12 00:47:47.592081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:19.761 [2024-07-12 00:47:47.592111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:20.021 [2024-07-12 00:47:47.599634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e84c0 00:35:20.021 [2024-07-12 00:47:47.600483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:20.021 [2024-07-12 00:47:47.600513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:20.021 [2024-07-12 00:47:47.613960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190fb480 
00:35:20.021 [2024-07-12 00:47:47.615037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.615066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.628365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190e3060
00:35:20.021 [2024-07-12 00:47:47.629631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.629661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.642209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.642385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.642414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.656918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.657080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.657107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.671552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.671718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.671747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.686287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.686453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.686482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.701012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.701171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.701199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.715698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.715857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.715884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.730395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.730556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.730592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.745224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.745395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.745424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.759967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.760125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.760153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.774663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.774824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.774851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.789355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.789516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.789544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.804066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.804227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.804256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.818821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.818994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.819024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.833521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.833690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.833719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.021 [2024-07-12 00:47:47.848208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.021 [2024-07-12 00:47:47.848374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.021 [2024-07-12 00:47:47.848401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.862908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.863066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.863097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.877685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.877846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.877874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.892296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.892456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.892491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.907024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.907183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.907213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.921727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.921893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.921922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.936447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.936608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.936645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.951176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.951344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.951372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.965894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.966061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.966088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.980616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.980785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.980813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:47.995406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:47.995570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:47.995605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.010103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.010270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.010298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.024836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.025000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.025028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.039508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.039682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.039712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.054210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.054376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.054405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.069004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.069164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.069192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.083949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.084114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.282 [2024-07-12 00:47:48.084142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.282 [2024-07-12 00:47:48.098720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.282 [2024-07-12 00:47:48.098886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.283 [2024-07-12 00:47:48.098914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.283 [2024-07-12 00:47:48.113456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.283 [2024-07-12 00:47:48.113634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.283 [2024-07-12 00:47:48.113663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.128273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.128442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.128471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.143027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.143191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.143220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.157792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.157956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.157983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.172573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.172751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.172779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.187288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.187459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.187488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.202084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.202261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.202289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.216850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.217019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.217047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.231647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.231813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.231842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.246482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.246656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.246685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.261234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.261402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.261430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.275996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.276157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.276200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.290793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.290964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.290992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.305541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.305717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.305746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.320249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.320413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.320440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.335087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.335252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.335279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.349825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.349985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.350012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.364619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.543 [2024-07-12 00:47:48.364782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.543 [2024-07-12 00:47:48.364809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.543 [2024-07-12 00:47:48.379373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.544 [2024-07-12 00:47:48.379538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.544 [2024-07-12 00:47:48.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.394368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.394530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.394558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.409160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.409330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.409357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.423920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.424079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.424108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.438651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.438845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.453449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.453613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.453648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.468293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.468454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.483093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.483256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.483283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.497963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.498135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.498164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.512746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.512909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.512938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.527538] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.527712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.527740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.542291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.542454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.803 [2024-07-12 00:47:48.542482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.803 [2024-07-12 00:47:48.557132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.803 [2024-07-12 00:47:48.557295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.557324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.804 [2024-07-12 00:47:48.571985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.804 [2024-07-12 00:47:48.572146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.572175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.804 [2024-07-12 00:47:48.586797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.804 [2024-07-12 00:47:48.586959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.586994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.804 [2024-07-12 00:47:48.601574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.804 [2024-07-12 00:47:48.601748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.601777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.804 [2024-07-12 00:47:48.616345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.804 [2024-07-12 00:47:48.616510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.616538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:20.804 [2024-07-12 00:47:48.631213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:20.804 [2024-07-12 00:47:48.631374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:20.804 [2024-07-12 00:47:48.631404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.646014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.646174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.646202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.660844] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.661012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.661049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.675676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.675845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.675872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.690509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.690681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.690713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.705322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.705488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.705518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.720100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.720261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.720289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.734903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.735065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.064 [2024-07-12 00:47:48.735093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.064 [2024-07-12 00:47:48.749697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270
00:35:21.064 [2024-07-12 00:47:48.749860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:21.065 [2024-07-12 00:47:48.749890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:35:21.065 [2024-07-12 00:47:48.764491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with
pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.764686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.779205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.779365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.779394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.794014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.794183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.794211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.808776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.808937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.808965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.823542] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.823712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.823742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.838305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.838467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.838496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.853135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.853296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.853324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.867887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.868048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.868076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.882603] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.882770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.882798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.065 [2024-07-12 00:47:48.897404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.065 [2024-07-12 00:47:48.897567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.065 [2024-07-12 00:47:48.897604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:48.912166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.912328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.912356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:48.926951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.927111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.927139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:35:21.326 [2024-07-12 00:47:48.941699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.941858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.941888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:48.956435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.956632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:48.971234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.971392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.971421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:48.986043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:48.986203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:48.986231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.000856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.001032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.001062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.015618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.015777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.015806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.030467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.030637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.030665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.045266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.045430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.045467] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.060188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.060349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.060378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.074997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.075160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.075189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.089732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.089889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.089917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.104504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.104675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.104703] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.119217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.119383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.119411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.133950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.134108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.134135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.148664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.148825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.326 [2024-07-12 00:47:49.148853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.326 [2024-07-12 00:47:49.163391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.326 [2024-07-12 00:47:49.163550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:21.326 [2024-07-12 00:47:49.163578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.585 [2024-07-12 00:47:49.178083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.585 [2024-07-12 00:47:49.178251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.586 [2024-07-12 00:47:49.178279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.586 [2024-07-12 00:47:49.192806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.586 [2024-07-12 00:47:49.192965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.586 [2024-07-12 00:47:49.192993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.586 [2024-07-12 00:47:49.207502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2b7f0) with pdu=0x2000190ef270 00:35:21.586 [2024-07-12 00:47:49.207672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.586 [2024-07-12 00:47:49.207700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:21.586 00:35:21.586 Latency(us) 00:35:21.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.586 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:21.586 nvme0n1 : 2.01 17542.85 68.53 0.00 0.00 7278.51 
3470.98 16117.00 00:35:21.586 =================================================================================================================== 00:35:21.586 Total : 17542.85 68.53 0.00 0.00 7278.51 3470.98 16117.00 00:35:21.586 0 00:35:21.586 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:21.586 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:21.586 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:21.586 | .driver_specific 00:35:21.586 | .nvme_error 00:35:21.586 | .status_code 00:35:21.586 | .command_transient_transport_error' 00:35:21.586 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 138 > 0 )) 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1077998 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1077998 ']' 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1077998 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077998 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:21.845 00:47:49 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077998' 00:35:21.845 killing process with pid 1077998 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1077998 00:35:21.845 Received shutdown signal, test time was about 2.000000 seconds 00:35:21.845 00:35:21.845 Latency(us) 00:35:21.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:21.845 =================================================================================================================== 00:35:21.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:21.845 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1077998 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1078340 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1078340 /var/tmp/bperf.sock 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 1078340 ']' 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 
00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:22.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:22.104 00:47:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.104 [2024-07-12 00:47:49.758513] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:22.104 [2024-07-12 00:47:49.758605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078340 ] 00:35:22.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:22.104 Zero copy mechanism will not be used. 
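The `get_transient_errcount` check above reads `bdev_get_iostat` output through the jq path `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and asserts the count is nonzero (`(( 138 > 0 ))` in this run). A minimal sketch of the same extraction in Python, assuming an illustrative iostat payload shaped after that jq path (the sample JSON below is hypothetical, not captured from this run):

```python
import json

# Hypothetical bdev_get_iostat response, modeled on the jq path used by
# host/digest.sh; only the fields that filter touches are included.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 138
          }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    # Mirrors: .bdevs[0] | .driver_specific | .nvme_error
    #          | .status_code | .command_transient_transport_error
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

print(transient_errcount(sample))  # 138 for this illustrative payload
```

With digest-error injection enabled (`accel_error_inject_error -o crc32c -t corrupt`), every corrupted WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), so this counter is what the test gates on.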
00:35:22.104 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.104 [2024-07-12 00:47:49.817911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.104 [2024-07-12 00:47:49.905779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.363 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:22.363 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:35:22.363 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.363 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.621 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:22.879 nvme0n1 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 32 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:22.879 00:47:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:23.140 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:23.140 Zero copy mechanism will not be used. 00:35:23.140 Running I/O for 2 seconds... 00:35:23.140 [2024-07-12 00:47:50.779033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.779390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.779429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.785695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.786034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.786067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.792432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with 
pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.792785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.792817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.799085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.799418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.799450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.805649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.805983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.806015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.812142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.812474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.812506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.818643] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.818983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.819015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.825191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.140 [2024-07-12 00:47:50.825526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.140 [2024-07-12 00:47:50.825558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.140 [2024-07-12 00:47:50.831668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.832001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.832033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.838065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.838399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.838430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.844444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.844785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.844817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.850898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.851230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.851262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.857382] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.857730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.857762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.863938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.864272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.864305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.870426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.870773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.870806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.876911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.877245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.877283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.883378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.883717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.883748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.889805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.890137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.890169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.896418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.896757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.896789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.902826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.903159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.903191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.909247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.909581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.909618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.915680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.916016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:23.141 [2024-07-12 00:47:50.916047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.922101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.922432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.922463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.928551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.928893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.928923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.935069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.935403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.935433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.941453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.941790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.941822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.947927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.948263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.948295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.954380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.954722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.954753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.960820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.961151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.961182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.967233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.967563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.967599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.141 [2024-07-12 00:47:50.973638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.141 [2024-07-12 00:47:50.973972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.141 [2024-07-12 00:47:50.974003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:50.980103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:50.980436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:50.980467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:50.986528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:50.986860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:50.986898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:50.992875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 
00:35:23.402 [2024-07-12 00:47:50.993208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:50.993240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:50.999233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:50.999568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:50.999607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.005656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.005990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.006021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.012097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.012428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.012459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.018501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.018841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.018875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.024919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.025253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.025284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.031303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.031644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.031677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.038104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.038439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.038472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 
00:47:51.044563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.402 [2024-07-12 00:47:51.044912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.402 [2024-07-12 00:47:51.044944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.402 [2024-07-12 00:47:51.050998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.051331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.051364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.057406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.057745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.057776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.063821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.064153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.064183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.070255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.070595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.070627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.076677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.077012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.077043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.083137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.083468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.083499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.089533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.089878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.089910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.095974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.096307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.096338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.102379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.102719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.102751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.108794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.109132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.109163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.115270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.115607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.115637] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.121675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.122007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.122037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.128060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.128390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.128421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.134439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.134774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.134822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.140888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.141223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.141254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.147268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.147609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.147639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.153644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.153975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.154014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.160022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.160355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.160385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.166439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.166779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.166810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.172869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.173200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.173231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.179280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.179617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.179648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.185630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.185962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.185993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.191961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.192294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.192324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.198421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.198788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.204811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.205142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.205172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.211155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.211494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.211525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.217545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 
00:35:23.403 [2024-07-12 00:47:51.217887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.217917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.223961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.403 [2024-07-12 00:47:51.224293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.403 [2024-07-12 00:47:51.224327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.403 [2024-07-12 00:47:51.230356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.404 [2024-07-12 00:47:51.230693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.404 [2024-07-12 00:47:51.230724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.404 [2024-07-12 00:47:51.236753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.404 [2024-07-12 00:47:51.237087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.404 [2024-07-12 00:47:51.237118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.243146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.243478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.243510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.249574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.249923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.249953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.255961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.256292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.256323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.262377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.262716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.262747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 
00:47:51.268838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.269201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.275238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.275569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.275607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.281531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.281872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.281903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.287968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.288304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.288337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.294380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.294721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.294753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.300786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.301120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.301151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.307252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.307584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.307621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.313618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.313959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.313991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.320027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.320358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.320400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.326455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.326793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.326824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.332851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.333189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.333220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.339247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.339578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.339617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.345668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.345999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.346030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.352064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.352396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.352428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.358469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.358810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.358841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.364936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.365272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.365304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.371337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.371676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.371713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.377806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.378145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.378177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.384227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.665 [2024-07-12 00:47:51.384558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.665 [2024-07-12 00:47:51.384595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.665 [2024-07-12 00:47:51.390651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.390990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.391024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.397044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.397375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.397406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.403431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.403778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.403808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.409864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.410196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.410227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.416450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.416789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.416820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.422878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.423210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.423241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.429330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.429670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.429703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.435789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.436122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.436153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.442224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 
00:35:23.666 [2024-07-12 00:47:51.442562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.442600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.448656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.448990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.449021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.455133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.455467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.455499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.461498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.461834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.461866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.467913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.468242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.468274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.474404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.474768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.474800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.480877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.481212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.481243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.487151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.487488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.487527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 
00:47:51.494003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.666 [2024-07-12 00:47:51.494324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.666 [2024-07-12 00:47:51.494354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.666 [2024-07-12 00:47:51.502307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.502672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.502709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.510105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.510454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.510486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.517877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.518193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.518225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.524629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.524946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.524978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.531225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.531558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.531595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.537880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.538224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.538258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.544535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.544880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.544911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.551148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.551490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.551522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.557671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.558003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.558036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.564345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.564687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.564719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.570936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.571266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.571299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.577565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.577908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.577941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.584211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.584544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.584576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.590633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.590975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.591007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.597208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.597524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.597555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.604415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.604757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.604796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.611198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.611530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.611561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.617789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.618121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.618154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.624385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.927 [2024-07-12 00:47:51.624736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.927 [2024-07-12 00:47:51.624769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:23.927 [2024-07-12 00:47:51.630661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.928 [2024-07-12 00:47:51.630975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.928 [2024-07-12 00:47:51.631007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:23.928 [2024-07-12 00:47:51.637603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.928 [2024-07-12 00:47:51.637935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.928 [2024-07-12 00:47:51.637966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:23.928 [2024-07-12 00:47:51.644928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.928 [2024-07-12 00:47:51.645260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:23.928 [2024-07-12 00:47:51.645296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:23.928 [2024-07-12 00:47:51.652853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:23.928 [2024-07-12 00:47:51.653202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.653237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.660185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.660517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.660549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.667073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.667421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.667453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.673571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.673911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.673943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.679914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.680248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.680279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.686327] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.686667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.686699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.692698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.693033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.693065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.699120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.699452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.699483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.705471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.705809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.705841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.711741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.712061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.712091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.718055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.718386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.718419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.724385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.724728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.724760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.730738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.731071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.731102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.737104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.737435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.737467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.743482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.743826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.743861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.749868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.750200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.750232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.756198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.756530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.756560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:23.928 [2024-07-12 00:47:51.762631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:23.928 [2024-07-12 00:47:51.762970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:23.928 [2024-07-12 00:47:51.763001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.768986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.769318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.769350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.775249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.775583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.775631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.781555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.781892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.781923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.788054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.788389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.788422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.794403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.794743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.794775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.800802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.801134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.801168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.807150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.807480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.807511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.813433] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.813774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.813807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.819761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.820093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.820124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.826153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.826484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.826515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.832409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.832754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.832790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.838738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.839073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.839105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.845370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.845711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.845743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.851716] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.852051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.852083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.857998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.858330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.858362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.864685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.865046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.871432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.871778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.871811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.878432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.878784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.878816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.885006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.885329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.885361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.891309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.891650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.891682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.897584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.897927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.897959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.903894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.904212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.904243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.910603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.910939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.910971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.917184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.917516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.917547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.923665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.923998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.924030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.930100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.930434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.930466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.936679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.188 [2024-07-12 00:47:51.937016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.188 [2024-07-12 00:47:51.937047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.188 [2024-07-12 00:47:51.944203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.944537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.944575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.951875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.952214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.952246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.958983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.959315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.959348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.966050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.966381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.966411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.972673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.973004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.973036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.979848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.980181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.980214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.987322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.987665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.987696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:51.994411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:51.994753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:51.994785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:52.001294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:52.001617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:52.001649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:52.007797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:52.008130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:52.008162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:52.014111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:52.014442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:52.014474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.189 [2024-07-12 00:47:52.020812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.189 [2024-07-12 00:47:52.021147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.189 [2024-07-12 00:47:52.021179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.027507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.027847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.027880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.033986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.034320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.034352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.041559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.041903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.041935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.049286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.049608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.049640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.056072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.056403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.056435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.062279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.062621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.062659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.068889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.069221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.069253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.075616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.075950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.075983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.082676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.083012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.083044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.089522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.089862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.089893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.096190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.096523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.096559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.102866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.103205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.103238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.109114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.109444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.109475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.116328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.116667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.116699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.122922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.123264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.123297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.129730] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.130063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.130096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.137860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.138194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.138225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.145003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.145343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.145374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.151436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.151778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.151810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.157911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.158247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.158278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.164181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.164511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.164543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.448 [2024-07-12 00:47:52.171207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.448 [2024-07-12 00:47:52.171542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.448 [2024-07-12 00:47:52.171574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:24.449 [2024-07-12 00:47:52.178051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.449 [2024-07-12 00:47:52.178390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.449 [2024-07-12 00:47:52.178423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:24.449 [2024-07-12 00:47:52.185138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.449 [2024-07-12 00:47:52.185471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.449 [2024-07-12 00:47:52.185502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:24.449 [2024-07-12 00:47:52.191541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90
00:35:24.449 [2024-07-12 00:47:52.191883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:24.449 [2024-07-12 00:47:52.191916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:24.449 [2024-07-12
00:47:52.198010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.198342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.198374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.204566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.204908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.204939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.211143] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.211476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.211508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.218226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.218545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.218576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.225908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.226242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.226272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.233326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.233665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.233697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.241516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.241867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.241906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.248171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.248503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.248535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.254465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.254805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.254836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.260898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.261231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.261264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.267254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.267592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.267624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.273645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.273977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.274009] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.449 [2024-07-12 00:47:52.280184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.449 [2024-07-12 00:47:52.280516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.449 [2024-07-12 00:47:52.280546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.286982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.287321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.293682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.294015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.294048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.301075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.301417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.301450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.307739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.308081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.308114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.314638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.314970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.315006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.320920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.321254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.321287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.327625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.327958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.327989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.335768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.336085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.336120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.342690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.343024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.343056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.348904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.349236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.349267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.355142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.355474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.355505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.361477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.361815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.361847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.367966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.368301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.368334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.374547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.374885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.374916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.381121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 
00:35:24.708 [2024-07-12 00:47:52.381453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.381485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.387941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.388274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.388306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.395745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.396094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.396125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.403316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.403654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.403686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.410198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.410538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.410571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.416733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.417072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.417111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.423136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.423468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.423499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.429736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.430068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.430099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 
00:47:52.436284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.436623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.436655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.442669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.443001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.443033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.449170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.449488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.455690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.456022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.456053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.462226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.462560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.462598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.468866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.469200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.469231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.475440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.475787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.475819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.482064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.708 [2024-07-12 00:47:52.482398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.708 [2024-07-12 00:47:52.482430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.708 [2024-07-12 00:47:52.488640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.489007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.489038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.495254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.495600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.495632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.501719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.502051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.502083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.508456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.508798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.508831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.516357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.516697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.516728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.523264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.523608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.523640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.529583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.529928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.529970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.536134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.536468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.536499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.709 [2024-07-12 00:47:52.542957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.709 [2024-07-12 00:47:52.543289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.709 [2024-07-12 00:47:52.543320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.549475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.549839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.549871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.556066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.556416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.556448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.562703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.563035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.563066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.569252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.569584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.569636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.575853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.576171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.576203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.582477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.582821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.582853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.588872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.589213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.589244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.595405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.595747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.595779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.601999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.602333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.602365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.608584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.608927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.608959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.615113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 
00:35:24.968 [2024-07-12 00:47:52.615428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.615459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.621448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.621790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.621821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.627973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.968 [2024-07-12 00:47:52.628304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.968 [2024-07-12 00:47:52.628336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.968 [2024-07-12 00:47:52.634565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.634896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.634927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.641042] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.641382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.641413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.647526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.647847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.647878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.654833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.655166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.655197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.661127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.661459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.661490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 
00:47:52.667454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.667794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.667825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.673792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.674126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.674158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.679995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.680330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.680361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.686238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.686568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.686608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.693022] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.693371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.693402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.700788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.701103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.701141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.707777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.708095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.708126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.714337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.714676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.714713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.721476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.721818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.721849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.728024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.728357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.728388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.734595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.734927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.734958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.741205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.741538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.741570] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.747686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.748010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.748040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.754497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.754835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.754867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.761367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.761714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.761746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.768105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.768434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.768467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:24.969 [2024-07-12 00:47:52.774639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1f2bb30) with pdu=0x2000190fef90 00:35:24.969 [2024-07-12 00:47:52.774831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:24.969 [2024-07-12 00:47:52.774863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:24.969 00:35:24.969 Latency(us) 00:35:24.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:24.969 nvme0n1 : 2.00 4690.44 586.30 0.00 0.00 3401.99 2961.26 8495.41 00:35:24.969 =================================================================================================================== 00:35:24.969 Total : 4690.44 586.30 0.00 0.00 3401.99 2961.26 8495.41 00:35:24.969 0 00:35:24.969 00:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:24.969 00:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:24.969 00:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:24.969 | .driver_specific 00:35:24.969 | .nvme_error 00:35:24.969 | .status_code 00:35:24.969 | .command_transient_transport_error' 00:35:24.969 00:47:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 303 > 0 
)) 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1078340 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1078340 ']' 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1078340 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:25.229 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1078340 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1078340' 00:35:25.488 killing process with pid 1078340 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1078340 00:35:25.488 Received shutdown signal, test time was about 2.000000 seconds 00:35:25.488 00:35:25.488 Latency(us) 00:35:25.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:25.488 =================================================================================================================== 00:35:25.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1078340 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1077258 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 1077258 ']' 00:35:25.488 00:47:53 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 1077258 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1077258 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1077258' 00:35:25.488 killing process with pid 1077258 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 1077258 00:35:25.488 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 1077258 00:35:25.748 00:35:25.748 real 0m15.146s 00:35:25.748 user 0m30.334s 00:35:25.748 sys 0m4.091s 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:25.748 ************************************ 00:35:25.748 END TEST nvmf_digest_error 00:35:25.748 ************************************ 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:25.748 
00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:25.748 rmmod nvme_tcp 00:35:25.748 rmmod nvme_fabrics 00:35:25.748 rmmod nvme_keyring 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1077258 ']' 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1077258 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 1077258 ']' 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 1077258 00:35:25.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1077258) - No such process 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 1077258 is not found' 00:35:25.748 Process with pid 1077258 is not found 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.748 00:47:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:25.748 00:47:53 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.278 00:47:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:28.278 00:35:28.278 real 0m34.815s 00:35:28.278 user 1m2.919s 00:35:28.278 sys 0m9.480s 00:35:28.278 00:47:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:28.278 00:47:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.278 ************************************ 00:35:28.278 END TEST nvmf_digest 00:35:28.278 ************************************ 00:35:28.278 00:47:55 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:35:28.278 00:47:55 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:35:28.278 00:47:55 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:35:28.278 00:47:55 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:28.278 00:47:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:28.278 00:47:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:28.278 00:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:28.278 ************************************ 00:35:28.278 START TEST nvmf_bdevperf 00:35:28.278 ************************************ 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:28.278 * Looking for test storage... 
00:35:28.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.278 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.278 00:47:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:28.279 00:47:55 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:35:28.279 00:47:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:29.660 00:47:57 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.660 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.661 
00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:35:29.661 Found 0000:08:00.0 (0x8086 - 0x159b) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:35:29.661 Found 0000:08:00.1 (0x8086 - 0x159b) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:35:29.661 Found net devices under 0000:08:00.0: cvl_0_0 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:35:29.661 Found net devices under 0000:08:00.1: cvl_0_1 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:29.661 00:47:57 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:29.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:35:29.661 00:35:29.661 --- 10.0.0.2 ping statistics --- 00:35:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.661 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:29.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:35:29.661 00:35:29.661 --- 10.0.0.1 ping statistics --- 00:35:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.661 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1080171 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1080171 00:35:29.661 00:47:57 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1080171 ']' 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:29.661 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:29.919 [2024-07-12 00:47:57.508556] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:29.919 [2024-07-12 00:47:57.508669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.919 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.919 [2024-07-12 00:47:57.574522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:29.919 [2024-07-12 00:47:57.662204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.919 [2024-07-12 00:47:57.662264] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.919 [2024-07-12 00:47:57.662281] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.919 [2024-07-12 00:47:57.662294] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.919 [2024-07-12 00:47:57.662306] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:29.919 [2024-07-12 00:47:57.662395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:29.919 [2024-07-12 00:47:57.662725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:29.919 [2024-07-12 00:47:57.662760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.176 [2024-07-12 00:47:57.785549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.176 Malloc0 00:35:30.176 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:30.177 [2024-07-12 00:47:57.843345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:35:30.177 { 00:35:30.177 "params": { 00:35:30.177 "name": "Nvme$subsystem", 00:35:30.177 "trtype": "$TEST_TRANSPORT", 00:35:30.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:30.177 "adrfam": "ipv4", 00:35:30.177 "trsvcid": "$NVMF_PORT", 00:35:30.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:30.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:30.177 "hdgst": ${hdgst:-false}, 00:35:30.177 "ddgst": ${ddgst:-false} 00:35:30.177 }, 00:35:30.177 "method": "bdev_nvme_attach_controller" 00:35:30.177 } 00:35:30.177 EOF 00:35:30.177 )") 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:30.177 00:47:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:30.177 "params": { 00:35:30.177 "name": "Nvme1", 00:35:30.177 "trtype": "tcp", 00:35:30.177 "traddr": "10.0.0.2", 00:35:30.177 "adrfam": "ipv4", 00:35:30.177 "trsvcid": "4420", 00:35:30.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:30.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:30.177 "hdgst": false, 00:35:30.177 "ddgst": false 00:35:30.177 }, 00:35:30.177 "method": "bdev_nvme_attach_controller" 00:35:30.177 }' 00:35:30.177 [2024-07-12 00:47:57.892491] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:30.177 [2024-07-12 00:47:57.892603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080240 ] 00:35:30.177 EAL: No free 2048 kB hugepages reported on node 1 00:35:30.177 [2024-07-12 00:47:57.952685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.434 [2024-07-12 00:47:58.040085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.434 Running I/O for 1 seconds... 00:35:31.824 00:35:31.824 Latency(us) 00:35:31.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.824 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:31.824 Verification LBA range: start 0x0 length 0x4000 00:35:31.824 Nvme1n1 : 1.01 7727.30 30.18 0.00 0.00 16482.16 3349.62 17670.45 00:35:31.824 =================================================================================================================== 00:35:31.824 Total : 7727.30 30.18 0.00 0.00 16482.16 3349.62 17670.45 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1080345 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:31.824 { 
00:35:31.824 "params": { 00:35:31.824 "name": "Nvme$subsystem", 00:35:31.824 "trtype": "$TEST_TRANSPORT", 00:35:31.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:31.824 "adrfam": "ipv4", 00:35:31.824 "trsvcid": "$NVMF_PORT", 00:35:31.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:31.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:31.824 "hdgst": ${hdgst:-false}, 00:35:31.824 "ddgst": ${ddgst:-false} 00:35:31.824 }, 00:35:31.824 "method": "bdev_nvme_attach_controller" 00:35:31.824 } 00:35:31.824 EOF 00:35:31.824 )") 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:35:31.824 00:47:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:31.824 "params": { 00:35:31.824 "name": "Nvme1", 00:35:31.824 "trtype": "tcp", 00:35:31.824 "traddr": "10.0.0.2", 00:35:31.824 "adrfam": "ipv4", 00:35:31.824 "trsvcid": "4420", 00:35:31.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:31.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:31.824 "hdgst": false, 00:35:31.824 "ddgst": false 00:35:31.824 }, 00:35:31.824 "method": "bdev_nvme_attach_controller" 00:35:31.824 }' 00:35:31.824 [2024-07-12 00:47:59.473170] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:31.824 [2024-07-12 00:47:59.473271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080345 ] 00:35:31.824 EAL: No free 2048 kB hugepages reported on node 1 00:35:31.824 [2024-07-12 00:47:59.534853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.824 [2024-07-12 00:47:59.621710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:32.098 Running I/O for 15 seconds... 00:35:34.627 00:48:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1080171 00:35:34.627 00:48:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:34.627 [2024-07-12 00:48:02.440210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.627 [2024-07-12 00:48:02.440393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.627 [2024-07-12 00:48:02.440877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.627 [2024-07-12 00:48:02.440894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.440911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.440928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.440944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.440961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 
[2024-07-12 00:48:02.440977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.628 [2024-07-12 00:48:02.441554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.441980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.441995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 
[2024-07-12 00:48:02.442130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.628 [2024-07-12 00:48:02.442298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.628 [2024-07-12 00:48:02.442315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:30680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.629 [2024-07-12 00:48:02.442715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.442967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.442982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 
[2024-07-12 00:48:02.443280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:30944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.629 [2024-07-12 00:48:02.443758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.629 [2024-07-12 00:48:02.443776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.630 [2024-07-12 00:48:02.443856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.443969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.443984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 
[2024-07-12 00:48:02.444415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.630 [2024-07-12 00:48:02.444549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c4aa0 is same with the state(5) to be set 00:35:34.630 [2024-07-12 00:48:02.444596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:34.630 [2024-07-12 00:48:02.444611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:34.630 
[2024-07-12 00:48:02.444624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31216 len:8 PRP1 0x0 PRP2 0x0 00:35:34.630 [2024-07-12 00:48:02.444639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444713] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24c4aa0 was disconnected and freed. reset controller. 00:35:34.630 [2024-07-12 00:48:02.444796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.630 [2024-07-12 00:48:02.444818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.630 [2024-07-12 00:48:02.444849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.630 [2024-07-12 00:48:02.444879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:34.630 [2024-07-12 00:48:02.444908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.630 [2024-07-12 00:48:02.444922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.630 [2024-07-12 00:48:02.449198] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.630 [2024-07-12 00:48:02.449239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.630 [2024-07-12 00:48:02.450024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.630 [2024-07-12 00:48:02.450066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.630 [2024-07-12 00:48:02.450085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.630 [2024-07-12 00:48:02.450357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.630 [2024-07-12 00:48:02.450640] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.630 [2024-07-12 00:48:02.450665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.630 [2024-07-12 00:48:02.450683] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.630 [2024-07-12 00:48:02.454733] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.630 [2024-07-12 00:48:02.463828] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.464274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.464330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.464348] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.464630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.464899] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.464921] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.464936] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.469001] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.478343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.478834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.478876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.478895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.479177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.479446] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.479468] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.479483] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.483542] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.492934] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.493386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.493452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.493471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.493755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.494025] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.494047] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.494063] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.498133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.507481] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.507972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.508013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.508033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.508303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.508571] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.508608] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.508634] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.512697] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.522046] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.522581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.522618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.522636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.522900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.523168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.523190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.523205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.527275] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.536652] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.537125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.537174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.537191] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.537455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.537735] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.537758] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.537773] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.541862] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.551237] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.551835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.551876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.551895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.552171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.552440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.552462] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.552483] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.556543] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.565685] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.566113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.566153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.566173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.566447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.566729] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.566753] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.566769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.570828] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.580166] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.580660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.890 [2024-07-12 00:48:02.580701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.890 [2024-07-12 00:48:02.580720] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.890 [2024-07-12 00:48:02.580991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.890 [2024-07-12 00:48:02.581260] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.890 [2024-07-12 00:48:02.581282] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.890 [2024-07-12 00:48:02.581297] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.890 [2024-07-12 00:48:02.585367] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.890 [2024-07-12 00:48:02.594714] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.890 [2024-07-12 00:48:02.595212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.595263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.595282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.595552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.595834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.595858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.595873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.599921] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.609274] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.609801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.609843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.609862] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.610133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.610408] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.610431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.610446] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.614496] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.623844] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.624308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.624355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.624372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.624648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.624917] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.624939] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.624954] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.629006] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.638391] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.638928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.638970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.638990] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.639261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.639529] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.639551] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.639566] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.643662] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.652787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.653265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.653295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.653312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.653576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.653857] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.653879] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.653894] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.657974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.667344] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.667793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.667839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.667856] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.668120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.668387] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.668409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.668424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.672500] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.681864] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.682291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.682320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.682337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.682611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.682879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.682901] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.682917] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.686962] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.696277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.696733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.696763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.696780] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.697043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.697311] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.697333] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.697347] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.701462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.710785] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.711203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.711233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.711262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.711533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.711821] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.711845] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.711860] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.891 [2024-07-12 00:48:02.715967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.891 [2024-07-12 00:48:02.725348] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.891 [2024-07-12 00:48:02.725806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.891 [2024-07-12 00:48:02.725869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:34.891 [2024-07-12 00:48:02.725915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:34.891 [2024-07-12 00:48:02.726179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:34.891 [2024-07-12 00:48:02.726446] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.891 [2024-07-12 00:48:02.726469] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.891 [2024-07-12 00:48:02.726484] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.150 [2024-07-12 00:48:02.730546] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.150 [2024-07-12 00:48:02.739913] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.150 [2024-07-12 00:48:02.740343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.740373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.740391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.740666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.740935] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.740957] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.740972] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.745042] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.754370] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.754933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.754963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.754980] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.755244] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.755511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.755540] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.755555] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.759647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.768759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.769270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.769311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.769330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.769615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.769884] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.769906] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.769922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.773998] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.783130] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.783631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.783662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.783680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.783943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.784212] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.784234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.784248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.788320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.797695] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.798159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.798199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.798218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.798488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.798770] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.798793] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.798809] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.802885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.812282] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.812757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.812797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.812817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.813100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.813368] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.813390] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.813405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.817469] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.826834] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.827346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.827387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.827406] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.827693] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.827963] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.827985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.828000] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.832065] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.841199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.841799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.841840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.841860] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.842136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.842404] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.842427] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.842442] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.846498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.855668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.856165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.856206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.856231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.856508] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.856789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.856812] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.856828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.860907] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.870014] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.870449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.870488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.870508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.870792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.871061] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.871084] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.871099] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.875170] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.884562] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.885059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.151 [2024-07-12 00:48:02.885108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.151 [2024-07-12 00:48:02.885125] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.151 [2024-07-12 00:48:02.885399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.151 [2024-07-12 00:48:02.885679] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.151 [2024-07-12 00:48:02.885703] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.151 [2024-07-12 00:48:02.885717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.151 [2024-07-12 00:48:02.889784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.151 [2024-07-12 00:48:02.898932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.151 [2024-07-12 00:48:02.899351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.899392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.899411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.899696] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.899967] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.899997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.900012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.904086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.913408] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.913919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.913950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.913967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.914232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.914499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.914522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.914536] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.918599] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.927932] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.928390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.928440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.928457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.928740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.929010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.929032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.929047] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.933103] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.942441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.942986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.943045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.943064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.943341] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.943622] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.943645] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.943660] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.947704] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.956877] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.957350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.957392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.957411] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.957695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.957965] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.957987] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.958002] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.962044] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.971349] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.971781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.971823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.971842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.972112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.972381] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.972403] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.972418] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.152 [2024-07-12 00:48:02.976474] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.152 [2024-07-12 00:48:02.985799] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.152 [2024-07-12 00:48:02.986304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.152 [2024-07-12 00:48:02.986345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.152 [2024-07-12 00:48:02.986364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.152 [2024-07-12 00:48:02.986646] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.152 [2024-07-12 00:48:02.986921] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.152 [2024-07-12 00:48:02.986943] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.152 [2024-07-12 00:48:02.986959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:02.991002] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.000329] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.000780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.000822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.000843] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.001120] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.411 [2024-07-12 00:48:03.001390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.411 [2024-07-12 00:48:03.001412] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.411 [2024-07-12 00:48:03.001427] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:03.005493] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.014793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.015191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.015222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.015240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.015504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.411 [2024-07-12 00:48:03.015781] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.411 [2024-07-12 00:48:03.015804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.411 [2024-07-12 00:48:03.015819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:03.019859] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.029150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.029540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.029570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.029594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.029862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.411 [2024-07-12 00:48:03.030130] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.411 [2024-07-12 00:48:03.030152] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.411 [2024-07-12 00:48:03.030167] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:03.034217] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.043576] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.044063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.044118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.044135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.044405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.411 [2024-07-12 00:48:03.044682] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.411 [2024-07-12 00:48:03.044704] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.411 [2024-07-12 00:48:03.044727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:03.048796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.058180] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.058645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.058676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.058693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.058957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.411 [2024-07-12 00:48:03.059225] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.411 [2024-07-12 00:48:03.059247] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.411 [2024-07-12 00:48:03.059263] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.411 [2024-07-12 00:48:03.063320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.411 [2024-07-12 00:48:03.072702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.411 [2024-07-12 00:48:03.073065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.411 [2024-07-12 00:48:03.073096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.411 [2024-07-12 00:48:03.073114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.411 [2024-07-12 00:48:03.073379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.412 [2024-07-12 00:48:03.073661] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.412 [2024-07-12 00:48:03.073684] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.412 [2024-07-12 00:48:03.073699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.412 [2024-07-12 00:48:03.077751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.412 [2024-07-12 00:48:03.087113] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.412 [2024-07-12 00:48:03.087676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.412 [2024-07-12 00:48:03.087718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.412 [2024-07-12 00:48:03.087737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.412 [2024-07-12 00:48:03.088013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.412 [2024-07-12 00:48:03.088283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.412 [2024-07-12 00:48:03.088305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.412 [2024-07-12 00:48:03.088320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.412 [2024-07-12 00:48:03.092375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.412 [2024-07-12 00:48:03.101493] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.412 [2024-07-12 00:48:03.102017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.412 [2024-07-12 00:48:03.102063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.412 [2024-07-12 00:48:03.102083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.412 [2024-07-12 00:48:03.102366] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.412 [2024-07-12 00:48:03.102650] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.412 [2024-07-12 00:48:03.102673] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.412 [2024-07-12 00:48:03.102689] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.412 [2024-07-12 00:48:03.106764] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.412 [2024-07-12 00:48:03.115911] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.412 [2024-07-12 00:48:03.116398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.412 [2024-07-12 00:48:03.116438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.412 [2024-07-12 00:48:03.116457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.412 [2024-07-12 00:48:03.116742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.412 [2024-07-12 00:48:03.117011] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.412 [2024-07-12 00:48:03.117034] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.412 [2024-07-12 00:48:03.117049] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.412 [2024-07-12 00:48:03.121113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.412 [2024-07-12 00:48:03.130455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.130937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.130990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.131010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.131280] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.131554] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.131576] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.131605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.135658] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.144981] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.145456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.145487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.145505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.145788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.146070] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.146093] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.146108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.150152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.159462] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.159915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.159957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.159976] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.160252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.160521] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.160544] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.160559] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.164615] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.173915] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.174402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.174453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.174471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.174744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.175013] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.175035] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.175051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.179099] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.188444] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.188940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.188982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.189002] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.189272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.189548] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.189570] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.189598] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.193689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.202811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.203210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.203242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.203260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.203524] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.203811] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.203834] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.203849] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.207975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.217389] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.217840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.217893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.412 [2024-07-12 00:48:03.217911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.412 [2024-07-12 00:48:03.218175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.412 [2024-07-12 00:48:03.218442] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.412 [2024-07-12 00:48:03.218464] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.412 [2024-07-12 00:48:03.218479] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.412 [2024-07-12 00:48:03.222556] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.412 [2024-07-12 00:48:03.231940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.412 [2024-07-12 00:48:03.232438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.412 [2024-07-12 00:48:03.232489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.413 [2024-07-12 00:48:03.232506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.413 [2024-07-12 00:48:03.232782] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.413 [2024-07-12 00:48:03.233050] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.413 [2024-07-12 00:48:03.233071] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.413 [2024-07-12 00:48:03.233086] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.413 [2024-07-12 00:48:03.237181] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.413 [2024-07-12 00:48:03.246306] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.413 [2024-07-12 00:48:03.246735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.413 [2024-07-12 00:48:03.246777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.413 [2024-07-12 00:48:03.246806] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.413 [2024-07-12 00:48:03.247077] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.413 [2024-07-12 00:48:03.247346] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.413 [2024-07-12 00:48:03.247368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.413 [2024-07-12 00:48:03.247383] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.251434] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.260767] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.261257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.261298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.261318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.261601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.261871] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.261893] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.261909] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.265957] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.275274] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.275776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.275833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.275853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.276124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.276395] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.276418] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.276433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.280492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.289839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.290312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.290362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.290379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.290656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.290931] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.290960] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.290976] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.295035] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.304354] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.304895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.304936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.304955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.305225] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.305494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.305516] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.305532] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.309602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.318906] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.319488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.319529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.319549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.319832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.320101] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.320123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.320138] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.324200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.333289] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.333809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.333851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.333870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.334141] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.334416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.334438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.334453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.338516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.347868] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.348332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.348386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.348404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.348679] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.672 [2024-07-12 00:48:03.348947] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.672 [2024-07-12 00:48:03.348969] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.672 [2024-07-12 00:48:03.348984] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.672 [2024-07-12 00:48:03.353062] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.672 [2024-07-12 00:48:03.362414] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.672 [2024-07-12 00:48:03.362917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.672 [2024-07-12 00:48:03.362965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.672 [2024-07-12 00:48:03.362982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.672 [2024-07-12 00:48:03.363246] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.363513] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.363535] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.363550] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.367634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.376973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.377446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.377494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.377512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.377785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.378053] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.378075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.378090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.382140] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.391455] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.391945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.391996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.392014] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.392284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.392552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.392573] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.392598] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.396671] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.405995] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.406532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.406574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.406604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.406886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.407156] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.407178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.407193] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.411242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.420560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.421080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.421122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.421140] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.421417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.421700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.421723] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.421738] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.425784] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.435107] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.435568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.435615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.435634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.435910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.436178] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.436200] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.436223] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.440296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.449644] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.450218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.450260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.450279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.450550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.450834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.450857] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.450872] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.454938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.464114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:35.673 [2024-07-12 00:48:03.464606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:35.673 [2024-07-12 00:48:03.464647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:35.673 [2024-07-12 00:48:03.464666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:35.673 [2024-07-12 00:48:03.464942] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:35.673 [2024-07-12 00:48:03.465211] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:35.673 [2024-07-12 00:48:03.465233] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:35.673 [2024-07-12 00:48:03.465248] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:35.673 [2024-07-12 00:48:03.469303] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:35.673 [2024-07-12 00:48:03.478581] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.673 [2024-07-12 00:48:03.479095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.673 [2024-07-12 00:48:03.479137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.673 [2024-07-12 00:48:03.479157] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.673 [2024-07-12 00:48:03.479426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.673 [2024-07-12 00:48:03.479709] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.673 [2024-07-12 00:48:03.479733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.673 [2024-07-12 00:48:03.479748] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.673 [2024-07-12 00:48:03.483805] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.673 [2024-07-12 00:48:03.493154] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.673 [2024-07-12 00:48:03.493620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.673 [2024-07-12 00:48:03.493651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.673 [2024-07-12 00:48:03.493668] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.673 [2024-07-12 00:48:03.493932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.673 [2024-07-12 00:48:03.494200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.673 [2024-07-12 00:48:03.494222] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.673 [2024-07-12 00:48:03.494237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.673 [2024-07-12 00:48:03.498290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.673 [2024-07-12 00:48:03.507630] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.673 [2024-07-12 00:48:03.508084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.673 [2024-07-12 00:48:03.508152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.673 [2024-07-12 00:48:03.508184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.673 [2024-07-12 00:48:03.508447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.673 [2024-07-12 00:48:03.508726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.673 [2024-07-12 00:48:03.508748] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.673 [2024-07-12 00:48:03.508763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.932 [2024-07-12 00:48:03.512802] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.932 [2024-07-12 00:48:03.522125] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.932 [2024-07-12 00:48:03.522601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.522630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.522647] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.522912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.523179] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.523200] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.523215] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.527263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.536574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.536995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.537035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.537054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.537335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.537618] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.537641] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.537656] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.541715] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.551050] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.551470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.551530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.551547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.551821] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.552089] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.552111] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.552127] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.556173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.565522] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.565997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.566026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.566043] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.566307] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.566575] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.566626] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.566643] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.570698] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.580035] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.580524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.580564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.580583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.580873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.581142] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.581165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.581186] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.585263] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.594569] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.594992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.595033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.595052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.595323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.595604] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.595627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.595642] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.599681] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.608986] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.609413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.609454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.609476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.609763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.610033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.610056] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.610071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.614113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.623433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.623894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.623945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.623962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.624226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.624494] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.624517] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.624532] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.628627] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.638029] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.638527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.638573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.638604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.638883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.639151] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.639173] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.639188] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.643258] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.652363] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.652828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.652880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.652897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.653162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.653429] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.653451] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.653466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.657505] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.666824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.667361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.933 [2024-07-12 00:48:03.667402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.933 [2024-07-12 00:48:03.667421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.933 [2024-07-12 00:48:03.667710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.933 [2024-07-12 00:48:03.667980] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.933 [2024-07-12 00:48:03.668002] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.933 [2024-07-12 00:48:03.668017] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.933 [2024-07-12 00:48:03.672057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.933 [2024-07-12 00:48:03.681374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.933 [2024-07-12 00:48:03.681846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.681887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.681906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.682183] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.682458] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.682481] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.682496] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.686561] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.695907] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.696378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.696410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.696428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.696711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.696979] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.697001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.697016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.701061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.710383] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.710818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.710848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.710866] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.711129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.711407] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.711431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.711445] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.715558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.724897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.725352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.725382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.725399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.725672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.725940] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.725962] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.725977] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.730024] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.739346] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.739799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.739828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.739845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.740115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.740382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.740404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.740419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.744481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.753818] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.754291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.754320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.754337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.754611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.754879] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.754901] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.754916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.934 [2024-07-12 00:48:03.758967] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.934 [2024-07-12 00:48:03.768281] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.934 [2024-07-12 00:48:03.768700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.934 [2024-07-12 00:48:03.768753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:35.934 [2024-07-12 00:48:03.768771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:35.934 [2024-07-12 00:48:03.769035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:35.934 [2024-07-12 00:48:03.769302] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.934 [2024-07-12 00:48:03.769324] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.934 [2024-07-12 00:48:03.769339] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.773386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.782716] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.783183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.783233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.783256] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.783526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.783803] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.783826] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.783841] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.787897] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.797201] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.797708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.797749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.797768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.798044] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.798313] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.798336] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.798351] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.802417] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.811766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.812276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.812331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.812350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.812639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.812908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.812931] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.812946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.817008] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.826312] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.826911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.826953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.826973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.827243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.827511] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.827539] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.827555] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.831631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.840706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.841222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.841264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.841283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.841553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.841834] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.841858] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.841873] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.845912] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.855256] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.855744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.855800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.855819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.856096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.856364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.856387] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.856402] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.860483] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.869618] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.870117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.870176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.870195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.870465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.870747] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.870770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.870786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.874840] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.884190] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.884691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.884732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.884752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.885023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.885291] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.885313] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.885328] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.889415] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.898757] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.899271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.899312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.899331] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.899615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.899885] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.899907] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.899922] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.903983] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.913290] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.913815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.913869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.913888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.914158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.194 [2024-07-12 00:48:03.914427] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.194 [2024-07-12 00:48:03.914449] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.194 [2024-07-12 00:48:03.914464] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.194 [2024-07-12 00:48:03.918518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.194 [2024-07-12 00:48:03.927840] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.194 [2024-07-12 00:48:03.928376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.194 [2024-07-12 00:48:03.928417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.194 [2024-07-12 00:48:03.928436] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.194 [2024-07-12 00:48:03.928727] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:03.928997] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:03.929020] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:03.929035] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:03.933113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:03.942219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:03.942738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:03.942779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:03.942799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:03.943069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:03.943338] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:03.943360] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:03.943375] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:03.947447] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:03.956794] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:03.957314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:03.957355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:03.957374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:03.957658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:03.957927] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:03.957949] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:03.957964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:03.962012] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:03.971179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:03.971777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:03.971819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:03.971838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:03.972108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:03.972378] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:03.972400] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:03.972424] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:03.976497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:03.985561] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:03.985994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:03.986048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:03.986066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:03.986330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:03.986608] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:03.986631] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:03.986646] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:03.990706] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:04.000036] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:04.000471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:04.000524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:04.000542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:04.000815] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:04.001084] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:04.001106] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:04.001122] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:04.005180] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:04.014500] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:04.014958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:04.015008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:04.015026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:04.015289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:04.015556] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:04.015578] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:04.015605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.195 [2024-07-12 00:48:04.019652] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.195 [2024-07-12 00:48:04.028957] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.195 [2024-07-12 00:48:04.029395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.195 [2024-07-12 00:48:04.029449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.195 [2024-07-12 00:48:04.029468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.195 [2024-07-12 00:48:04.029751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.195 [2024-07-12 00:48:04.030020] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.195 [2024-07-12 00:48:04.030042] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.195 [2024-07-12 00:48:04.030057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.034100] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.043418] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.043915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.043957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.043977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.044253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.044522] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.044545] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.044560] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.048610] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.057915] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.058371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.058423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.058440] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.058715] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.058984] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.059005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.059021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.063077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.072369] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.072795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.072837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.072857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.073134] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.073403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.073425] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.073440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.077486] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.086947] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.087409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.087454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.087475] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.087763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.088032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.088054] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.088069] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.092113] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.101410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.101895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.101949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.101968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.102245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.102514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.102537] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.102552] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.106613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.115922] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.116344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.116393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.116425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.116699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.116968] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.116990] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.117011] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.121051] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.130342] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.455 [2024-07-12 00:48:04.130782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.455 [2024-07-12 00:48:04.130832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.455 [2024-07-12 00:48:04.130849] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.455 [2024-07-12 00:48:04.131113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.455 [2024-07-12 00:48:04.131382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.455 [2024-07-12 00:48:04.131404] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.455 [2024-07-12 00:48:04.131419] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.455 [2024-07-12 00:48:04.135460] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.455 [2024-07-12 00:48:04.144755] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.455 [2024-07-12 00:48:04.145236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.455 [2024-07-12 00:48:04.145296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.455 [2024-07-12 00:48:04.145313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.455 [2024-07-12 00:48:04.145576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.455 [2024-07-12 00:48:04.145861] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.455 [2024-07-12 00:48:04.145883] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.455 [2024-07-12 00:48:04.145898] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.455 [2024-07-12 00:48:04.149940] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.455 [2024-07-12 00:48:04.159228] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.455 [2024-07-12 00:48:04.159690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.455 [2024-07-12 00:48:04.159725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.455 [2024-07-12 00:48:04.159743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.455 [2024-07-12 00:48:04.160006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.160273] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.160295] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.160310] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.164377] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.173752] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.174229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.174263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.174281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.174551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.174833] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.174856] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.174871] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.178942] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.188309] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.188685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.188715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.188732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.188996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.189266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.189288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.189303] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.193385] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.202770] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.203239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.203295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.203314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.203604] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.203874] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.203896] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.203911] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.208009] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.217143] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.217638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.217713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.217733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.218003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.218278] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.218300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.218315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.222425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.231611] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.232100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.232136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.232167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.232431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.232711] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.232734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.232749] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.236819] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.246179] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.246663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.246693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.246710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.246973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.247241] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.247263] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.247278] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.251346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.260737] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.261191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.261221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.261238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.261502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.261784] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.261807] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.261822] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.265925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.275236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.275689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.275737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.275755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.276019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.276286] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.276308] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.276323] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.456 [2024-07-12 00:48:04.280370] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.456 [2024-07-12 00:48:04.289683] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.456 [2024-07-12 00:48:04.290165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.456 [2024-07-12 00:48:04.290196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.456 [2024-07-12 00:48:04.290213] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.456 [2024-07-12 00:48:04.290477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.456 [2024-07-12 00:48:04.290755] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.456 [2024-07-12 00:48:04.290779] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.456 [2024-07-12 00:48:04.290795] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.716 [2024-07-12 00:48:04.294838] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.716 [2024-07-12 00:48:04.304132] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.716 [2024-07-12 00:48:04.304565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.716 [2024-07-12 00:48:04.304618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.716 [2024-07-12 00:48:04.304636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.716 [2024-07-12 00:48:04.304899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.716 [2024-07-12 00:48:04.305168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.716 [2024-07-12 00:48:04.305190] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.716 [2024-07-12 00:48:04.305205] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.716 [2024-07-12 00:48:04.309265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.716 [2024-07-12 00:48:04.318654] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.716 [2024-07-12 00:48:04.319105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.716 [2024-07-12 00:48:04.319137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.716 [2024-07-12 00:48:04.319162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.716 [2024-07-12 00:48:04.319426] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.716 [2024-07-12 00:48:04.319709] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.716 [2024-07-12 00:48:04.319733] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.716 [2024-07-12 00:48:04.319754] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.716 [2024-07-12 00:48:04.323837] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.716 [2024-07-12 00:48:04.333199] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.716 [2024-07-12 00:48:04.333680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.716 [2024-07-12 00:48:04.333735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.716 [2024-07-12 00:48:04.333752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.716 [2024-07-12 00:48:04.334015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.716 [2024-07-12 00:48:04.334283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.716 [2024-07-12 00:48:04.334305] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.716 [2024-07-12 00:48:04.334320] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.716 [2024-07-12 00:48:04.338429] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.716 [2024-07-12 00:48:04.347549] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.716 [2024-07-12 00:48:04.348023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.716 [2024-07-12 00:48:04.348064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.348083] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.348354] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.348639] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.348663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.348678] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.352758] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.362148] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.362637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.362678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.362698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.362974] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.363242] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.363272] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.363288] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.367333] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.376659] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.377119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.377159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.377178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.377448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.377731] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.377754] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.377769] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.381855] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.391255] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.391738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.391778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.391798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.392074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.392343] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.392365] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.392380] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.396465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.405840] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.406265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.406336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.406355] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.406645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.406915] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.406937] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.406953] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.410995] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.420297] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.420691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.420722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.420740] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.421010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.421277] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.421300] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.421315] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.425353] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.434645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.435113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.435163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.435181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.435445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.435724] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.435747] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.435763] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.439799] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.449097] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.449563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.449619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.449637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.449901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.450168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.450191] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.450206] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.454242] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.463681] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.464186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.464227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.464246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.464530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.464811] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.464835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.464850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.468915] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.478150] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.478640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.478695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.478714] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.478984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.479253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.479276] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.479291] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.483366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.492922] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.717 [2024-07-12 00:48:04.493403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.717 [2024-07-12 00:48:04.493457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.717 [2024-07-12 00:48:04.493476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.717 [2024-07-12 00:48:04.493759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.717 [2024-07-12 00:48:04.494029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.717 [2024-07-12 00:48:04.494051] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.717 [2024-07-12 00:48:04.494066] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.717 [2024-07-12 00:48:04.498157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.717 [2024-07-12 00:48:04.507350] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.718 [2024-07-12 00:48:04.507827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.718 [2024-07-12 00:48:04.507858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.718 [2024-07-12 00:48:04.507877] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.718 [2024-07-12 00:48:04.508142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.718 [2024-07-12 00:48:04.508409] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.718 [2024-07-12 00:48:04.508431] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.718 [2024-07-12 00:48:04.508453] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.718 [2024-07-12 00:48:04.512518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.718 [2024-07-12 00:48:04.521930] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.718 [2024-07-12 00:48:04.522504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.718 [2024-07-12 00:48:04.522545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.718 [2024-07-12 00:48:04.522564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.718 [2024-07-12 00:48:04.522847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.718 [2024-07-12 00:48:04.523117] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.718 [2024-07-12 00:48:04.523140] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.718 [2024-07-12 00:48:04.523155] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.718 [2024-07-12 00:48:04.527215] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.718 [2024-07-12 00:48:04.536292] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:36.718 [2024-07-12 00:48:04.536902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:36.718 [2024-07-12 00:48:04.536944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:36.718 [2024-07-12 00:48:04.536963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:36.718 [2024-07-12 00:48:04.537233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:36.718 [2024-07-12 00:48:04.537503] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:36.718 [2024-07-12 00:48:04.537525] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:36.718 [2024-07-12 00:48:04.537540] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:36.718 [2024-07-12 00:48:04.541631] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:36.718 [2024-07-12 00:48:04.550727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.718 [2024-07-12 00:48:04.551290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.718 [2024-07-12 00:48:04.551331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.718 [2024-07-12 00:48:04.551350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.718 [2024-07-12 00:48:04.551639] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.718 [2024-07-12 00:48:04.551908] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.718 [2024-07-12 00:48:04.551931] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.718 [2024-07-12 00:48:04.551946] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.975 [2024-07-12 00:48:04.556016] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.975 [2024-07-12 00:48:04.565140] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.975 [2024-07-12 00:48:04.565669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.975 [2024-07-12 00:48:04.565710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.975 [2024-07-12 00:48:04.565729] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.975 [2024-07-12 00:48:04.566005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.975 [2024-07-12 00:48:04.566274] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.975 [2024-07-12 00:48:04.566297] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.975 [2024-07-12 00:48:04.566311] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.975 [2024-07-12 00:48:04.570393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.975 [2024-07-12 00:48:04.579541] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.975 [2024-07-12 00:48:04.580162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.975 [2024-07-12 00:48:04.580203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.975 [2024-07-12 00:48:04.580222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.580493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.580776] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.580800] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.580815] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.584900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.594063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.594604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.594635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.594652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.594916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.595185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.595207] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.595222] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.599288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.608645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.609161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.609202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.609222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.609492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.609780] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.609804] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.609819] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.613880] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.623185] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.623630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.623671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.623690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.623960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.624230] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.624252] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.624267] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.628328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.637667] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.638097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.638147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.638164] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.638434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.638712] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.638734] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.638750] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.642823] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.652127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.652607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.652655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.652672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.652936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.653204] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.653226] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.653241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.657323] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.666702] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.667182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.667212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.667229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.667493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.667769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.667792] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.667807] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.671845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.681156] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.681617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.681647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.681665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.681929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.682197] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.682218] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.682234] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.686304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.695676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.696174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.696228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.696247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.696518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.696799] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.696822] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.696837] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.700925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.710061] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.710581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.710632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.710660] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.710932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.711200] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.711222] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.711237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.715311] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.724441] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.724901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.724942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.976 [2024-07-12 00:48:04.724962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.976 [2024-07-12 00:48:04.725232] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.976 [2024-07-12 00:48:04.725505] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.976 [2024-07-12 00:48:04.725528] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.976 [2024-07-12 00:48:04.725543] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.976 [2024-07-12 00:48:04.729689] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.976 [2024-07-12 00:48:04.738817] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.976 [2024-07-12 00:48:04.739338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.976 [2024-07-12 00:48:04.739391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.739410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.739694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.739964] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.739986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.740001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.977 [2024-07-12 00:48:04.744089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.977 [2024-07-12 00:48:04.753226] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.977 [2024-07-12 00:48:04.753725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.977 [2024-07-12 00:48:04.753767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.753786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.754056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.754331] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.754354] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.754370] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.977 [2024-07-12 00:48:04.758439] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.977 [2024-07-12 00:48:04.767810] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.977 [2024-07-12 00:48:04.768328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.977 [2024-07-12 00:48:04.768370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.768389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.768680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.768950] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.768973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.768988] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.977 [2024-07-12 00:48:04.773075] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.977 [2024-07-12 00:48:04.782238] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.977 [2024-07-12 00:48:04.782722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.977 [2024-07-12 00:48:04.782777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.782796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.783067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.783336] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.783358] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.783373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.977 [2024-07-12 00:48:04.787461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.977 [2024-07-12 00:48:04.796825] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.977 [2024-07-12 00:48:04.797396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.977 [2024-07-12 00:48:04.797438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.797457] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.797743] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.798014] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.798036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.798051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.977 [2024-07-12 00:48:04.802130] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.977 [2024-07-12 00:48:04.811263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.977 [2024-07-12 00:48:04.811720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.977 [2024-07-12 00:48:04.811751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:36.977 [2024-07-12 00:48:04.811788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:36.977 [2024-07-12 00:48:04.812073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:36.977 [2024-07-12 00:48:04.812341] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.977 [2024-07-12 00:48:04.812364] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.977 [2024-07-12 00:48:04.812379] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.816448] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.825811] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.826261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.826324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.826341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.826618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.826887] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.826909] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.826930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.831027] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.840421] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.840915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.840945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.840962] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.841226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.841493] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.841515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.841530] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.845621] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.854980] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.855467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.855531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.855554] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.855830] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.856098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.856120] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.856135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.860203] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.869560] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.870090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.870132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.870151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.870421] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.870705] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.870729] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.870744] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.874808] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.883945] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.884470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.884511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.884530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.884818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.885088] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.885110] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.885126] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.889218] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.898339] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.898770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.898801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.898819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.899082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.899350] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.899378] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.899393] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.903459] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.912899] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.913419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.913460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.913480] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.913769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.914038] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.914060] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.914075] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.918133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.927244] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.235 [2024-07-12 00:48:04.927743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.235 [2024-07-12 00:48:04.927798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.235 [2024-07-12 00:48:04.927817] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.235 [2024-07-12 00:48:04.928093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.235 [2024-07-12 00:48:04.928362] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.235 [2024-07-12 00:48:04.928385] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.235 [2024-07-12 00:48:04.928400] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.235 [2024-07-12 00:48:04.932470] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.235 [2024-07-12 00:48:04.941843] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:04.942346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:04.942386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:04.942405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:04.942702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:04.942974] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:04.942997] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:04.943012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:04.947098] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:04.956241] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:04.956730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:04.956783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:04.956800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:04.957065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:04.957332] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:04.957354] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:04.957368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:04.961445] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:04.970841] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:04.971332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:04.971383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:04.971401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:04.971677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:04.971945] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:04.971967] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:04.971981] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:04.976040] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:04.985471] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:04.985956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:04.985997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:04.986016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:04.986293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:04.986562] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:04.986584] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:04.986614] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:04.990684] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.000040] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.000509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.000564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:05.000583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:05.000875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:05.001144] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:05.001167] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:05.001182] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:05.005257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.014626] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.015160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.015201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:05.015220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:05.015490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:05.015774] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:05.015797] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:05.015813] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:05.019900] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.029041] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.029548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.029598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:05.029619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:05.029891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:05.030160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:05.030182] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:05.030197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:05.034277] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.043436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.043835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.043866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:05.043884] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:05.044148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:05.044416] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:05.044438] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:05.044459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:05.048523] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.057870] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.058410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.058451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.236 [2024-07-12 00:48:05.058470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.236 [2024-07-12 00:48:05.058759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.236 [2024-07-12 00:48:05.059029] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.236 [2024-07-12 00:48:05.059052] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.236 [2024-07-12 00:48:05.059067] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.236 [2024-07-12 00:48:05.063135] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.236 [2024-07-12 00:48:05.072245] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.236 [2024-07-12 00:48:05.072613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.236 [2024-07-12 00:48:05.072644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.496 [2024-07-12 00:48:05.072661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.496 [2024-07-12 00:48:05.072926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.496 [2024-07-12 00:48:05.073193] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.496 [2024-07-12 00:48:05.073215] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.496 [2024-07-12 00:48:05.073231] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.496 [2024-07-12 00:48:05.077300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.496 [2024-07-12 00:48:05.086673] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.496 [2024-07-12 00:48:05.087201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.496 [2024-07-12 00:48:05.087242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.496 [2024-07-12 00:48:05.087261] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.496 [2024-07-12 00:48:05.087531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.496 [2024-07-12 00:48:05.087812] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.496 [2024-07-12 00:48:05.087835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.496 [2024-07-12 00:48:05.087850] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.496 [2024-07-12 00:48:05.091937] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.496 [2024-07-12 00:48:05.101075] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.496 [2024-07-12 00:48:05.101483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.496 [2024-07-12 00:48:05.101551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.496 [2024-07-12 00:48:05.101569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.496 [2024-07-12 00:48:05.101851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.496 [2024-07-12 00:48:05.102120] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.496 [2024-07-12 00:48:05.102142] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.496 [2024-07-12 00:48:05.102157] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.496 [2024-07-12 00:48:05.106300] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.496 [2024-07-12 00:48:05.115636] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.496 [2024-07-12 00:48:05.116175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.496 [2024-07-12 00:48:05.116216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.496 [2024-07-12 00:48:05.116235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.496 [2024-07-12 00:48:05.116506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.496 [2024-07-12 00:48:05.116789] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.496 [2024-07-12 00:48:05.116813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.496 [2024-07-12 00:48:05.116828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.120891] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.130016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.130513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.130555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.130574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.130856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.131126] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.131148] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.131163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.135227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.144559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.145081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.145122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.145142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.145412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.145700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.145724] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.145740] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.149795] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.158928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.159486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.159534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.159557] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.159854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.160128] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.160151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.160168] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.164219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.173298] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.173774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.173815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.173835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.174106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.174375] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.174398] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.174413] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.178465] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.187792] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.188349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.188391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.188410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.188692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.188961] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.188985] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.189001] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.193060] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.202137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.497 [2024-07-12 00:48:05.202558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.497 [2024-07-12 00:48:05.202596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.497 [2024-07-12 00:48:05.202615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.497 [2024-07-12 00:48:05.202880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.497 [2024-07-12 00:48:05.203147] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.497 [2024-07-12 00:48:05.203170] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.497 [2024-07-12 00:48:05.203185] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.497 [2024-07-12 00:48:05.207228] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.497 [2024-07-12 00:48:05.216546] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.497 [2024-07-12 00:48:05.217046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.497 [2024-07-12 00:48:05.217090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.497 [2024-07-12 00:48:05.217108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.497 [2024-07-12 00:48:05.217372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.497 [2024-07-12 00:48:05.217649] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.497 [2024-07-12 00:48:05.217672] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.497 [2024-07-12 00:48:05.217688] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.497 [2024-07-12 00:48:05.221726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.497 [2024-07-12 00:48:05.231022] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.231403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.231447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.231464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.231763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.232033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.232056] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.232071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.236196] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.245534] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.246020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.246078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.246103] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.246374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.246655] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.246679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.246694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.250734] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.260048] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.260519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.260576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.260606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.260877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.261146] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.261169] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.261184] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.265238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.274526] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.274905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.274939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.274969] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.275233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.275500] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.275522] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.275537] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.279580] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.288920] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.289400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.289458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.289477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.289767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.290037] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.290065] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.290081] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.294134] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.303429] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.304001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.304058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.304077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.304347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.304631] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.304654] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.304669] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.308725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.317817] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.318352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.318408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.318427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.498 [2024-07-12 00:48:05.318713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.498 [2024-07-12 00:48:05.318983] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.498 [2024-07-12 00:48:05.319006] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.498 [2024-07-12 00:48:05.319021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.498 [2024-07-12 00:48:05.323089] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.498 [2024-07-12 00:48:05.332184] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.498 [2024-07-12 00:48:05.332624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.498 [2024-07-12 00:48:05.332677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.498 [2024-07-12 00:48:05.332697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.499 [2024-07-12 00:48:05.332973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.499 [2024-07-12 00:48:05.333242] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.499 [2024-07-12 00:48:05.333264] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.499 [2024-07-12 00:48:05.333280] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.337328] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.346639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.347078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.759 [2024-07-12 00:48:05.347130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.759 [2024-07-12 00:48:05.347148] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.759 [2024-07-12 00:48:05.347412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.759 [2024-07-12 00:48:05.347690] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.759 [2024-07-12 00:48:05.347713] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.759 [2024-07-12 00:48:05.347728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.351797] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.361098] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.361518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.759 [2024-07-12 00:48:05.361569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.759 [2024-07-12 00:48:05.361596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.759 [2024-07-12 00:48:05.361862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.759 [2024-07-12 00:48:05.362129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.759 [2024-07-12 00:48:05.362151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.759 [2024-07-12 00:48:05.362166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.366219] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.375531] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.376010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.759 [2024-07-12 00:48:05.376068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.759 [2024-07-12 00:48:05.376087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.759 [2024-07-12 00:48:05.376357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.759 [2024-07-12 00:48:05.376642] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.759 [2024-07-12 00:48:05.376665] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.759 [2024-07-12 00:48:05.376680] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.380724] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.390028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.390528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.759 [2024-07-12 00:48:05.390569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.759 [2024-07-12 00:48:05.390604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.759 [2024-07-12 00:48:05.390883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.759 [2024-07-12 00:48:05.391152] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.759 [2024-07-12 00:48:05.391174] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.759 [2024-07-12 00:48:05.391189] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.395239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.404574] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.405010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.759 [2024-07-12 00:48:05.405057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.759 [2024-07-12 00:48:05.405074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.759 [2024-07-12 00:48:05.405338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.759 [2024-07-12 00:48:05.405617] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.759 [2024-07-12 00:48:05.405640] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.759 [2024-07-12 00:48:05.405655] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.759 [2024-07-12 00:48:05.409707] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.759 [2024-07-12 00:48:05.419021] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.759 [2024-07-12 00:48:05.419512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.419569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.419595] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.419862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.420129] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.420151] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.420166] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.424227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.433530] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.433985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.434015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.434032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.434295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1080171 Killed "${NVMF_APP[@]}" "$@"
00:35:37.760 [2024-07-12 00:48:05.434564] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.434595] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.434612] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-12 00:48:05.438650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1081049
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1081049
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 1081049 ']'
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:35:37.760 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:37.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:37.760 [2024-07-12 00:48:05.447952] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.448387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.448430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.448448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.448724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.448993] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.449016] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.449032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.453073] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.462358] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.462790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.462819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.462836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.463099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.463366] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.463388] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.463412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.467457] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.476754] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.477138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.477167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.477184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.477448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.477725] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.477748] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.477762] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.481807] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.488755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:35:37.760 [2024-07-12 00:48:05.488850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:37.760 [2024-07-12 00:48:05.491243] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.491623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.491655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.491674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.491945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.492214] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.492237] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.492252] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.496298] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.505615] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.506001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.506031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.506049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.506314] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.506583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.506614] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.506630] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 [2024-07-12 00:48:05.510881] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.519944] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.520399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.520442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.520463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.520753] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.521024] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.521047] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.521063] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.760 EAL: No free 2048 kB hugepages reported on node 1
00:35:37.760 [2024-07-12 00:48:05.525105] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.760 [2024-07-12 00:48:05.534403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:35:37.760 [2024-07-12 00:48:05.534851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:37.760 [2024-07-12 00:48:05.534893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420
00:35:37.760 [2024-07-12 00:48:05.534913] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set
00:35:37.760 [2024-07-12 00:48:05.535184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor
00:35:37.760 [2024-07-12 00:48:05.535453] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:35:37.760 [2024-07-12 00:48:05.535476] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:35:37.760 [2024-07-12 00:48:05.535491] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:35:37.761 [2024-07-12 00:48:05.539541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:35:37.761 [2024-07-12 00:48:05.548842] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.761 [2024-07-12 00:48:05.549248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.761 [2024-07-12 00:48:05.549279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.761 [2024-07-12 00:48:05.549297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.761 [2024-07-12 00:48:05.549568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.761 [2024-07-12 00:48:05.549845] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.761 [2024-07-12 00:48:05.549867] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.761 [2024-07-12 00:48:05.549883] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.761 [2024-07-12 00:48:05.553965] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.761 [2024-07-12 00:48:05.556693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:37.761 [2024-07-12 00:48:05.563335] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.761 [2024-07-12 00:48:05.563871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.761 [2024-07-12 00:48:05.563909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.761 [2024-07-12 00:48:05.563931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.761 [2024-07-12 00:48:05.564208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.761 [2024-07-12 00:48:05.564489] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.761 [2024-07-12 00:48:05.564512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.761 [2024-07-12 00:48:05.564531] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.761 [2024-07-12 00:48:05.568664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.761 [2024-07-12 00:48:05.577788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.761 [2024-07-12 00:48:05.578289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.761 [2024-07-12 00:48:05.578325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.761 [2024-07-12 00:48:05.578346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.761 [2024-07-12 00:48:05.578629] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.761 [2024-07-12 00:48:05.578904] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.761 [2024-07-12 00:48:05.578927] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.761 [2024-07-12 00:48:05.578945] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:37.761 [2024-07-12 00:48:05.582990] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:37.761 [2024-07-12 00:48:05.592311] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:37.761 [2024-07-12 00:48:05.592795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:37.761 [2024-07-12 00:48:05.592832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:37.761 [2024-07-12 00:48:05.592853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:37.761 [2024-07-12 00:48:05.593124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:37.761 [2024-07-12 00:48:05.593398] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:37.761 [2024-07-12 00:48:05.593421] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:37.761 [2024-07-12 00:48:05.593440] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.597482] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.606879] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.607395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.607434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.607456] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.607754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.608036] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.608059] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.608077] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.612173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.621529] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.622039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.622079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.622100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.622372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.622657] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.622681] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.622700] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.626747] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.636074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.636544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.636578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.636608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.636881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.637153] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.637176] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.637195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.641238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.021 [2024-07-12 00:48:05.643742] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.021 [2024-07-12 00:48:05.643779] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.021 [2024-07-12 00:48:05.643795] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:38.021 [2024-07-12 00:48:05.643808] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:35:38.021 [2024-07-12 00:48:05.643820] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.021 [2024-07-12 00:48:05.643901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.021 [2024-07-12 00:48:05.644095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.021 [2024-07-12 00:48:05.644130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.021 [2024-07-12 00:48:05.650608] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.651129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.651170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.651192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.651469] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.651757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.651780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.651799] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.655914] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.665170] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.665702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.665740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.665762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.666037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.666316] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.666339] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.666357] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.670479] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.679790] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.680301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.680340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.021 [2024-07-12 00:48:05.680361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.021 [2024-07-12 00:48:05.680649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.021 [2024-07-12 00:48:05.680929] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.021 [2024-07-12 00:48:05.680952] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.021 [2024-07-12 00:48:05.680970] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.021 [2024-07-12 00:48:05.685077] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.021 [2024-07-12 00:48:05.694230] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.021 [2024-07-12 00:48:05.694724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.021 [2024-07-12 00:48:05.694763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.694784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.695069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.695345] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.695369] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.695387] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.699491] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 [2024-07-12 00:48:05.708736] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.709250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.709289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.709310] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.709595] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.709875] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.709898] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.709916] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.713975] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 [2024-07-12 00:48:05.723302] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.723767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.723803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.723822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.724092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.724364] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.724386] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.724405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.728454] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 [2024-07-12 00:48:05.737763] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.738160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.738190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.738208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.738472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.738760] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.738784] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.738808] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:35:38.022 [2024-07-12 00:48:05.742930] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.022 [2024-07-12 00:48:05.752265] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.752636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.752667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.752684] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.752948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.753218] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.753241] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.753256] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.757294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.022 [2024-07-12 00:48:05.763756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:38.022 [2024-07-12 00:48:05.766594] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.766981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.767010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.767027] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.767290] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.767558] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.767580] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.767605] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.022 [2024-07-12 00:48:05.771650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.022 [2024-07-12 00:48:05.780938] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.781298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.781333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.781351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.781624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.781892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.781915] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.781930] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.785982] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 [2024-07-12 00:48:05.795366] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.795901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.795940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.795961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 [2024-07-12 00:48:05.796237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 [2024-07-12 00:48:05.796514] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.796537] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.796556] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.800647] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:38.022 Malloc0 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.022 [2024-07-12 00:48:05.809994] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.022 [2024-07-12 00:48:05.810431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.022 [2024-07-12 00:48:05.810463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24ca950 with addr=10.0.0.2, port=4420 00:35:38.022 [2024-07-12 00:48:05.810484] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ca950 is same with the state(5) to be set 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.022 [2024-07-12 00:48:05.810765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ca950 (9): Bad file descriptor 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.022 [2024-07-12 00:48:05.811040] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.022 [2024-07-12 00:48:05.811062] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.022 [2024-07-12 00:48:05.811080] 
nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.022 [2024-07-12 00:48:05.815126] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.022 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:38.023 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.023 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:38.023 [2024-07-12 00:48:05.822537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.023 [2024-07-12 00:48:05.824432] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.023 00:48:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.023 00:48:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1080345 00:35:38.281 [2024-07-12 00:48:05.955232] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:35:48.253 
00:35:48.253 Latency(us)
00:35:48.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:48.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:35:48.253 Verification LBA range: start 0x0 length 0x4000
00:35:48.253 Nvme1n1 : 15.05 5913.26 23.10 7516.44 0.00 9477.28 719.08 40972.14
00:35:48.253 ===================================================================================================================
00:35:48.253 Total : 5913.26 23.10 7516.44 0.00 9477.28 719.08 40972.14
00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:48:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:48:15 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1081049 ']' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 1081049 ']' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1081049' 00:35:48.253 killing process with pid 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 1081049 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.253 00:48:15 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:48.253 00:48:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.630 00:48:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:49.630 00:35:49.630 real 0m21.706s 00:35:49.630 user 0m58.936s 00:35:49.630 sys 0m3.877s 00:35:49.630 00:48:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:49.630 00:48:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:49.630 ************************************ 00:35:49.630 END TEST nvmf_bdevperf 00:35:49.630 ************************************ 00:35:49.630 00:48:17 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:49.630 00:48:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:49.630 00:48:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:49.630 00:48:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:49.630 ************************************ 00:35:49.630 START TEST nvmf_target_disconnect 00:35:49.630 ************************************ 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:49.630 * Looking for test storage... 
00:35:49.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:49.630 00:48:17 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:49.630 00:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:35:49.631 00:48:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:35:51.535 Found 0000:08:00.0 (0x8086 - 0x159b) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
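The vendor/device-ID bucketing above (Intel 0x8086 parts into the e810/x722 arrays, Mellanox 0x15b3 parts into mlx) can be sketched as a small classifier. `classify` is a hypothetical stand-in for the harness's array logic, not SPDK code; the device IDs are the ones nvmf/common.sh registers in the trace:

```shell
# Hypothetical classifier mirroring the e810/x722/mlx buckets in the log;
# the device IDs are those registered by nvmf/common.sh above.
classify() {
  case "$1" in
    0x1592|0x159b) echo e810 ;;        # Intel E810 family
    0x37d2)        echo x722 ;;        # Intel X722
    0xa2dc|0x1021|0xa2d6|0x101d|0x1017|0x1019|0x1015|0x1013)
                   echo mlx ;;         # Mellanox ConnectX family
    *)             echo unknown ;;
  esac
}
classify 0x159b   # the two ports the log finds are 0x8086:0x159b -> prints: e810
```

Both discovered ports (0000:08:00.0 and 0000:08:00.1) report device ID 0x159b bound to the `ice` driver, which is why the harness takes the e810 branch below.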
00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:35:51.535 Found 0000:08:00.1 (0x8086 - 0x159b) 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:51.535 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:51.536 00:48:18 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:35:51.536 Found net devices under 0000:08:00.0: cvl_0_0 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:35:51.536 Found net devices under 0000:08:00.1: cvl_0_1 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:51.536 00:48:18 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:51.536 00:48:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:51.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:51.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:35:51.536 00:35:51.536 --- 10.0.0.2 ping statistics --- 00:35:51.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.536 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:51.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:51.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:35:51.536 00:35:51.536 --- 10.0.0.1 ping statistics --- 00:35:51.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:51.536 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:51.536 00:48:19 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.536 ************************************ 00:35:51.536 START TEST nvmf_target_disconnect_tc1 00:35:51.536 ************************************ 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:51.536 00:48:19 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:51.536 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.536 [2024-07-12 00:48:19.225213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:51.536 [2024-07-12 00:48:19.225292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e45f0 with addr=10.0.0.2, port=4420 00:35:51.536 [2024-07-12 00:48:19.225327] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:51.536 [2024-07-12 00:48:19.225349] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:51.536 [2024-07-12 00:48:19.225364] nvme.c: 
898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:35:51.536 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:51.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:51.536 Initializing NVMe Controllers 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:51.536 00:35:51.536 real 0m0.091s 00:35:51.536 user 0m0.042s 00:35:51.536 sys 0m0.048s 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:51.536 ************************************ 00:35:51.536 END TEST nvmf_target_disconnect_tc1 00:35:51.536 ************************************ 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:51.536 ************************************ 00:35:51.536 START TEST nvmf_target_disconnect_tc2 00:35:51.536 ************************************ 00:35:51.536 00:48:19 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:51.536 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1083902 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1083902 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1083902 ']' 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
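`waitforlisten 1083902` above blocks until the freshly started target is listening on `/var/tmp/spdk.sock`. A minimal sketch of such a poll loop follows; `wait_for_socket` is a hypothetical helper written for illustration, not SPDK's actual `waitforlisten` implementation, and the demo stands up a throwaway socket file in place of the target app:

```shell
# Hypothetical poll loop in the spirit of waitforlisten: block, with a retry
# budget, until a UNIX-domain socket path appears on disk.
wait_for_socket() {
  local sock=$1 retries=${2:-50}
  while (( retries > 0 )); do
    [ -S "$sock" ] && return 0
    retries=$(( retries - 1 ))
    sleep 0.1
  done
  return 1
}

# Demo: create a throwaway UNIX socket (python stands in for the target app),
# then wait on it; the bound socket file persists after the process exits.
python3 -c 'import socket; socket.socket(socket.AF_UNIX).bind("/tmp/demo_rpc.sock")'
wait_for_socket /tmp/demo_rpc.sock && echo "socket ready"
rm -f /tmp/demo_rpc.sock
```

The real helper additionally verifies the PID is still alive while polling, so a crashed target fails fast instead of burning the whole retry budget.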
00:35:51.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:51.537 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.537 [2024-07-12 00:48:19.339272] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:51.537 [2024-07-12 00:48:19.339351] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.537 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.794 [2024-07-12 00:48:19.404238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.794 [2024-07-12 00:48:19.492896] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.794 [2024-07-12 00:48:19.492954] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:51.794 [2024-07-12 00:48:19.492970] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.795 [2024-07-12 00:48:19.492984] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.795 [2024-07-12 00:48:19.492997] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
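The target was launched with `-m 0xF0`, and the reactor notices that follow confirm cores 4-7 came up. Decoding such a hex core mask into a core list can be sketched as below; `mask_to_cores` is a hypothetical helper for illustration, not part of the harness:

```shell
# Hypothetical decoder: turn a hex core mask like the log's -m 0xF0 into the
# list of CPU cores it selects (bit N set -> core N).
mask_to_cores() {
  local mask=$(( $1 )) core=0 cores=()
  while (( mask > 0 )); do
    if (( mask & 1 )); then
      cores+=("$core")
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores[@]}"
}
mask_to_cores 0xF0   # prints: 4 5 6 7 (matching the reactor notices in the log)
```

The bdevperf initiator side earlier used `-c 0xF` (cores 0-3), so target and initiator reactors land on disjoint cores of the same host.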
00:35:51.795 [2024-07-12 00:48:19.493114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:35:51.795 [2024-07-12 00:48:19.493684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:35:51.795 [2024-07-12 00:48:19.493795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:35:51.795 [2024-07-12 00:48:19.493804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:35:51.795 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:35:51.795 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:35:51.795 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:35:51.795 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:51.795 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 Malloc0
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 [2024-07-12 00:48:19.663792] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 [2024-07-12 00:48:19.692026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1084011
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:35:52.053 00:48:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:35:52.053 EAL: No free 2048 kB hugepages reported on node 1
00:35:53.961 00:48:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1083902
00:35:53.961 00:48:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:35:53.961 Read completed with error (sct=0, sc=8)
00:35:53.961 starting I/O failed
00:35:53.961 Read completed with error
(sct=0, sc=8)
00:35:53.961 starting I/O failed
[... the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for the remaining outstanding I/Os on this qpair ...]
00:35:53.961 [2024-07-12 00:48:21.716967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... another full run of "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pairs ...]
00:35:53.961 [2024-07-12 00:48:21.717340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:35:53.961 [2024-07-12 00:48:21.717574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.961 [2024-07-12 00:48:21.717669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of
tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.961 qpair failed and we were unable to recover it.
[... the "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats several dozen times between 00:48:21.717 and 00:48:21.728 ...]
00:35:53.963 [2024-07-12 00:48:21.728136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-07-12 00:48:21.728162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
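The errno = 111 in the entries above is ECONNREFUSED: once the target process is killed with kill -9, nothing is listening on 10.0.0.2:4420, so every reconnect attempt's connect() is refused until the listener comes back. The sketch below reproduces just that failure mode with plain Python sockets (not SPDK); the connect_errno helper and the ephemeral-port trick are illustrative assumptions, not part of the test harness.

```python
import errno
import socket

def connect_errno(addr, port):
    """Try one TCP connect; return None on success, or the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((addr, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Reserve an ephemeral port, then close the listener so nothing is bound
# there -- standing in for the nvmf target process that was just killed.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]
srv.close()

# With no listener, the connect is refused (ECONNREFUSED, numerically 111
# on Linux -- the same "errno = 111" the log reports).
err = connect_errno("127.0.0.1", port)
print(err == errno.ECONNREFUSED)
```

SPDK's initiator keeps retrying exactly this connect in a loop, which is why the log shows the same triple of messages over and over until the 10-second run window of the reconnect tool expires.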
00:35:53.963 Read completed with error (sct=0, sc=8)
00:35:53.963 starting I/O failed
[... the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for the remaining outstanding I/Os ...]
00:35:53.963 [2024-07-12 00:48:21.728520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:53.963 [2024-07-12 00:48:21.728618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-07-12 00:48:21.728671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [2024-07-12 00:48:21.728866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-07-12 00:48:21.728919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [2024-07-12 00:48:21.729097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.963 [2024-07-12 00:48:21.729125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.963 qpair failed and we were unable to recover it.
00:35:53.963 [2024-07-12 00:48:21.729240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-07-12 00:48:21.729267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-07-12 00:48:21.729405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-07-12 00:48:21.729430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-07-12 00:48:21.729529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-07-12 00:48:21.729555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-07-12 00:48:21.729691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-07-12 00:48:21.729727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.963 qpair failed and we were unable to recover it. 00:35:53.963 [2024-07-12 00:48:21.729839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.963 [2024-07-12 00:48:21.729866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.729972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.729997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.730552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.730936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.730963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.731046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.731177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.731384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.731524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.731671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.731818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.731927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.731952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.732063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.732089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.732172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.732198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.732314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.732365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.732476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.732504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.732584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.732616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.733642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.733681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.733835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.733885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.733962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.733988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.734155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.734182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 [2024-07-12 00:48:21.734266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.734298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.734377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.734402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.734514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.734541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 00:35:53.964 [2024-07-12 00:48:21.734670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.964 [2024-07-12 00:48:21.734712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.964 qpair failed and we were unable to recover it. 
00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 
Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Write completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.964 Read completed with error (sct=0, sc=8) 00:35:53.964 starting I/O failed 00:35:53.965 [2024-07-12 00:48:21.735068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:53.965 [2024-07-12 00:48:21.735174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.735211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.735403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.735453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.735536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.735563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.735784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.735811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.735889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.735915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.736381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.736876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.736919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.737008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.737151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.737307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.737566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.737733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.737880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.737907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.738021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.738195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.738359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.738468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.738609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.738723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.738897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.738924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 
00:35:53.965 [2024-07-12 00:48:21.739412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.965 [2024-07-12 00:48:21.739833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.965 qpair failed and we were unable to recover it. 00:35:53.965 [2024-07-12 00:48:21.739949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.739976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.740053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.740236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.740339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.740509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.740728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.740901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.740945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.741576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.741874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.741904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.742010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.742191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.742357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.742572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.742717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.742870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.742919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.743115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.743814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.743959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.743985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.744514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.744876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.744982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.745120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.966 [2024-07-12 00:48:21.745258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.745443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.745593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.745760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.745817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 00:35:53.966 [2024-07-12 00:48:21.746007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.966 [2024-07-12 00:48:21.746033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.966 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.746143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.746361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.746501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.746678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.746790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.746937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.746986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.747141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.747265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.747428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.747601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.747742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.747886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.747912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.748379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.748839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.748950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.748974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.749083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.749199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.749355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.749494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.749620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.749852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.749904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.750515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.750955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.750983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.751085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.751241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.751397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.751511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.751623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.967 [2024-07-12 00:48:21.751752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 
00:35:53.967 [2024-07-12 00:48:21.751857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.967 [2024-07-12 00:48:21.751882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.967 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-07-12 00:48:21.752600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.752859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.752903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-07-12 00:48:21.753290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.753904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.753930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-07-12 00:48:21.754147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.754173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.754282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.754334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.754429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.754455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.754608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.754658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 00:35:53.968 [2024-07-12 00:48:21.754748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.968 [2024-07-12 00:48:21.754774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.968 qpair failed and we were unable to recover it. 
00:35:53.968 [2024-07-12 00:48:21.754890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.754944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.755874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.755924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.756037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.756065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.756217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.756258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.756385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.756410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.756504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.756530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.756661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.756717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.757618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.757650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.757748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.757776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.757901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.757959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.758139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.758300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.758488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.758687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.758882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.758998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.759051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.968 qpair failed and we were unable to recover it.
00:35:53.968 [2024-07-12 00:48:21.759138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.968 [2024-07-12 00:48:21.759165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.759932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.759971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.760094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.760136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.760277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.760329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.760449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.760498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.760579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.760612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.760773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.760816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.761937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.761963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.762846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.762890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.763923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.763948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.764110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.764141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.764275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.764320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.764437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.764493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.764592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.764619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.764808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.764834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.765023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.765049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.765149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.765176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.765280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.765337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.765466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.969 [2024-07-12 00:48:21.765492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.969 qpair failed and we were unable to recover it.
00:35:53.969 [2024-07-12 00:48:21.765619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.765675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.765779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.765805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.765885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.765911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.766859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.766885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.767849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.767907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.768850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.768960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.769931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.769980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.770148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.770312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.770416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.770538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.970 [2024-07-12 00:48:21.770646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.970 qpair failed and we were unable to recover it.
00:35:53.970 [2024-07-12 00:48:21.770736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.770764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.770886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.770932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.771824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.771884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.772884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.772932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.773041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.773092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.773177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.773202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.773276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.971 [2024-07-12 00:48:21.773302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:53.971 qpair failed and we were unable to recover it.
00:35:53.971 [2024-07-12 00:48:21.773382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.773482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.773582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.773697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.773828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-07-12 00:48:21.773932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.773958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-07-12 00:48:21.774547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.774908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.774934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-07-12 00:48:21.775152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 
00:35:53.971 [2024-07-12 00:48:21.775758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.775911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.775960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.971 [2024-07-12 00:48:21.776045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.971 [2024-07-12 00:48:21.776071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.971 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.776390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.776857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.776963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.776989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.777498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.777963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.777991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.778072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.778249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.778376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.778486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.778609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.778769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.778900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.778927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.779386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.779831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.779951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.779976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 
00:35:53.972 [2024-07-12 00:48:21.780523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.780883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.780928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.972 [2024-07-12 00:48:21.781030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.972 [2024-07-12 00:48:21.781081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.972 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.781180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-07-12 00:48:21.781339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.781469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.781570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.781688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.781795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-07-12 00:48:21.781900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.781926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.782007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.782032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.782131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.782158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.782245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.782272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 00:35:53.973 [2024-07-12 00:48:21.782363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.782391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it. 
00:35:53.973 [2024-07-12 00:48:21.782476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.973 [2024-07-12 00:48:21.782504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:53.973 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1037:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it." — repeats approximately 115 more times between 00:48:21.782 and 00:48:21.798, all targeting addr=10.0.0.2, port=4420, for tqpair values 0x7f6aa0000b90, 0x7f6aa8000b90, 0x7f6ab0000b90, and 0x863990 ...]
00:35:54.276 [2024-07-12 00:48:21.798719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.798746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.798840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.798866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.798962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.798989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.799073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.799100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.799188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.799217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 
00:35:54.276 [2024-07-12 00:48:21.799299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.799327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.799428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.799455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.799539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.276 [2024-07-12 00:48:21.799567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.276 qpair failed and we were unable to recover it. 00:35:54.276 [2024-07-12 00:48:21.799656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.799683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.799761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.799787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.799864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.799889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.799977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.800425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.800944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.800979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.801078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.801664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.801891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.801916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.802226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.802783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.802920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.802997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.803352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.803863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 
00:35:54.277 [2024-07-12 00:48:21.803973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.803999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.804098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.804125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.804212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.804239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.277 [2024-07-12 00:48:21.804330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.277 [2024-07-12 00:48:21.804358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.277 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.804441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.804469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.804558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.804594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.804674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.804708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.804797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.804824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.804910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.804938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.805028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.805144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.805254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.805368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.805483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.805722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.805867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.805899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.806491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.806954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.806981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.807062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.807089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.807177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.807203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.807284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.807315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.807395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.807421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 00:35:54.278 [2024-07-12 00:48:21.807512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.278 [2024-07-12 00:48:21.807542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.278 qpair failed and we were unable to recover it. 
00:35:54.278 [2024-07-12 00:48:21.807642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.278 [2024-07-12 00:48:21.807675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.278 qpair failed and we were unable to recover it.
00:35:54.282 [... the same three-line failure sequence (connect() failed, errno = 111 → sock connection error → qpair failed and we were unable to recover it) repeats continuously from 00:48:21.807810 through 00:48:21.822573 for tqpair values 0x7f6aa0000b90, 0x7f6aa8000b90, 0x7f6ab0000b90, and 0x863990, all against addr=10.0.0.2, port=4420 ...]
00:35:54.282 [2024-07-12 00:48:21.822675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.822702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.822806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.822854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.822942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.822969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 
00:35:54.282 [2024-07-12 00:48:21.823296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.823802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 
00:35:54.282 [2024-07-12 00:48:21.823913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.823939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 
00:35:54.282 [2024-07-12 00:48:21.824478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.824947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.824972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 
00:35:54.282 [2024-07-12 00:48:21.825059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.825177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.825320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.825435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.825557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 
00:35:54.282 [2024-07-12 00:48:21.825693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.282 [2024-07-12 00:48:21.825795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.282 [2024-07-12 00:48:21.825820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.282 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.825921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.825949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.826399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.826887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.826982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.827104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.827208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.827317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.827443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.827550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.827695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.827722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.828461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.828491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.828576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.828610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.828691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.828722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.828814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.828839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.828925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.828952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.829483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.829859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.829963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.830090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.830653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.830897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.830984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.831010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.831090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.831116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 
00:35:54.283 [2024-07-12 00:48:21.831194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.831221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.831295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.831321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.283 [2024-07-12 00:48:21.831417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.283 [2024-07-12 00:48:21.831448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.283 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.831538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.831563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.831695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.831735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 
00:35:54.284 [2024-07-12 00:48:21.831828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.831858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.831934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.831961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.832052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.832080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.832165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.832192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 00:35:54.284 [2024-07-12 00:48:21.832274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.284 [2024-07-12 00:48:21.832299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.284 qpair failed and we were unable to recover it. 
00:35:54.284 [2024-07-12 00:48:21.832382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.284 [2024-07-12 00:48:21.832408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.284 qpair failed and we were unable to recover it.
00:35:54.285 [2024-07-12 00:48:21.837024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.285 [2024-07-12 00:48:21.837056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.285 qpair failed and we were unable to recover it.
00:35:54.285 [2024-07-12 00:48:21.837148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.285 [2024-07-12 00:48:21.837179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.285 qpair failed and we were unable to recover it.
00:35:54.285 [2024-07-12 00:48:21.837292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.285 [2024-07-12 00:48:21.837334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.285 qpair failed and we were unable to recover it.
00:35:54.287 [2024-07-12 00:48:21.848035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.848672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.848917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.848993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.849202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.849812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.849966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.849992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.850404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.850836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.850957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.850982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 
00:35:54.287 [2024-07-12 00:48:21.851504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.287 [2024-07-12 00:48:21.851850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.287 qpair failed and we were unable to recover it. 00:35:54.287 [2024-07-12 00:48:21.851930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.851956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.852064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.852606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.852943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.852971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.853153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.853697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.853918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.853945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.854241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.854789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.854893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.854926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 
00:35:54.288 [2024-07-12 00:48:21.855315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.288 [2024-07-12 00:48:21.855690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.288 qpair failed and we were unable to recover it. 00:35:54.288 [2024-07-12 00:48:21.855817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.855843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 
00:35:54.289 [2024-07-12 00:48:21.855922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.855949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 00:35:54.289 [2024-07-12 00:48:21.856032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.856058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 00:35:54.289 [2024-07-12 00:48:21.856137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.856163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 00:35:54.289 [2024-07-12 00:48:21.856253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.856281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 00:35:54.289 [2024-07-12 00:48:21.856374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.289 [2024-07-12 00:48:21.856400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.289 qpair failed and we were unable to recover it. 
00:35:54.289 [2024-07-12 00:48:21.856476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.856503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.856591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.856620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.856714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.856742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.856836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.856864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.856948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.856974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.857934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.857960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.858915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.858943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.859928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.859954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.860040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.860073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.860166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.860199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.860278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.860304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.289 [2024-07-12 00:48:21.860379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.289 [2024-07-12 00:48:21.860405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.289 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.860487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.860514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.860594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.860622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.860707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.860734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.860810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.860837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.860916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.860945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.861947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.861973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.862942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.862968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.863911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.863937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.290 [2024-07-12 00:48:21.864937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.290 [2024-07-12 00:48:21.864964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.290 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.865923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.865950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.866909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.866935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.867947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.867976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.868923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.868951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.869057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.869102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.869197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.869225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.869301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.869327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.869402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.291 [2024-07-12 00:48:21.869429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.291 qpair failed and we were unable to recover it.
00:35:54.291 [2024-07-12 00:48:21.869507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.291 [2024-07-12 00:48:21.869533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.291 qpair failed and we were unable to recover it. 00:35:54.291 [2024-07-12 00:48:21.869615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.291 [2024-07-12 00:48:21.869642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.869729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.869755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.869829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.869856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.869929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.869955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.870033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.870599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.870916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.870945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.871167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.871726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.871942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.871967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.872268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.872841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.872942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.872968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.873367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.873808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 
00:35:54.292 [2024-07-12 00:48:21.873911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.873938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.874030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.874058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.292 qpair failed and we were unable to recover it. 00:35:54.292 [2024-07-12 00:48:21.874136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.292 [2024-07-12 00:48:21.874162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.874468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.874894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.874920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.874999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.875544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.875898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.875925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.876117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.876656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.876914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.876994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.877114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.877216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.877321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.877437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.877541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 00:35:54.293 [2024-07-12 00:48:21.877657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.293 [2024-07-12 00:48:21.877686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.293 qpair failed and we were unable to recover it. 
00:35:54.293 [2024-07-12 00:48:21.877772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.293 [2024-07-12 00:48:21.877799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.293 qpair failed and we were unable to recover it.
00:35:54.293 [2024-07-12 00:48:21.877876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.293 [2024-07-12 00:48:21.877902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.293 qpair failed and we were unable to recover it.
00:35:54.293 [2024-07-12 00:48:21.877981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.293 [2024-07-12 00:48:21.878007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.293 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.878911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.878937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.879942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.879971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.880861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.880887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.881905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.881991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.294 [2024-07-12 00:48:21.882657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.294 [2024-07-12 00:48:21.882686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.294 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.882778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.882803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.882890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.882933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.883903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.883929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.884968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.884994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.885958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.885985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.886938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.886967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.887054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.887080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.295 qpair failed and we were unable to recover it.
00:35:54.295 [2024-07-12 00:48:21.887167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.295 [2024-07-12 00:48:21.887195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.887965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.887990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.888899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.888978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.889917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.889999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.296 [2024-07-12 00:48:21.890546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.296 qpair failed and we were unable to recover it.
00:35:54.296 [2024-07-12 00:48:21.890640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.890667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.890756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.890785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.890871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.890897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.890978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.891079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 
00:35:54.296 [2024-07-12 00:48:21.891180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.891289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.891391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.891512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 00:35:54.296 [2024-07-12 00:48:21.891632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.296 [2024-07-12 00:48:21.891662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.296 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.891743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.891767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.891846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.891872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.891952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.891977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.892265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.892796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.892906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.892932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.893332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.893889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.893915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.893997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.894445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.894904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.894932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.895011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.895039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.895125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.895151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.895232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.895258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.895344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.895370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 00:35:54.297 [2024-07-12 00:48:21.895456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.297 [2024-07-12 00:48:21.895481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.297 qpair failed and we were unable to recover it. 
00:35:54.297 [2024-07-12 00:48:21.895562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.895597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.895716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.895743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.895834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.895864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.895960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.895987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-07-12 00:48:21.896177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-07-12 00:48:21.896761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.896901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.896983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-07-12 00:48:21.897298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 00:35:54.298 [2024-07-12 00:48:21.897737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.298 [2024-07-12 00:48:21.897764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.298 qpair failed and we were unable to recover it. 
00:35:54.298 [2024-07-12 00:48:21.898217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871320 is same with the state(5) to be set
00:35:54.298 [2024-07-12 00:48:21.898454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.298 [2024-07-12 00:48:21.898487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.298 qpair failed and we were unable to recover it.
00:35:54.299 [... the connect() errno=111 / sock connection error / "qpair failed" record continues to repeat (00:48:21.898 through 00:48:21.901) for tqpairs 0x863990, 0x7f6ab0000b90, 0x7f6aa8000b90, and 0x7f6aa0000b90, all with addr=10.0.0.2, port=4420 ...]
00:35:54.299 [2024-07-12 00:48:21.901323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.901428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.901532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.901652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.901792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.901918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.901949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.902465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.902909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.902935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.903016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.903564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.903893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.903920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.904112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 
00:35:54.299 [2024-07-12 00:48:21.904694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.299 [2024-07-12 00:48:21.904798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.299 [2024-07-12 00:48:21.904824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.299 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.904905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.904932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.905249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.905787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.905917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.905998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.906327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.906787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.906894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.906920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.907456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.907906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.907932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.908019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.908540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.908907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.908991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.909018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 
00:35:54.300 [2024-07-12 00:48:21.909100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.909128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.909210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.909237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.300 [2024-07-12 00:48:21.909316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.300 [2024-07-12 00:48:21.909348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.300 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.909429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.909534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.301 [2024-07-12 00:48:21.909647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.909751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.909863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.909970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.909997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 00:35:54.301 [2024-07-12 00:48:21.910086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.301 [2024-07-12 00:48:21.910113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.301 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.922469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.922496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.922572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.922609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.922694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.922720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.922799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.922826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.922906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.922933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.923022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.923561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.923885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.923914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.924102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.924642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.924959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.924987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.925165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.925712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.925924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.925950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.926254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 00:35:54.304 [2024-07-12 00:48:21.926698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.304 [2024-07-12 00:48:21.926725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.304 qpair failed and we were unable to recover it. 
00:35:54.304 [2024-07-12 00:48:21.926805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.926832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.926913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.926939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.927348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.927829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.927937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.927964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.928472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.928931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.928957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.929032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.929614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.929935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.929961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.930046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.930153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.930254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.930362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.930466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 00:35:54.305 [2024-07-12 00:48:21.930569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.305 [2024-07-12 00:48:21.930602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.305 qpair failed and we were unable to recover it. 
00:35:54.305 [2024-07-12 00:48:21.930682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.305 [2024-07-12 00:48:21.930706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.305 qpair failed and we were unable to recover it.
00:35:54.305 [2024-07-12 00:48:21.930782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.305 [2024-07-12 00:48:21.930808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.305 qpair failed and we were unable to recover it.
00:35:54.305 [2024-07-12 00:48:21.930891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.305 [2024-07-12 00:48:21.930917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.305 qpair failed and we were unable to recover it.
00:35:54.305 [2024-07-12 00:48:21.930998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.931884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.931964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.932900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.932930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.933918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.933998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.934899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.934926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.935007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.935034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.306 [2024-07-12 00:48:21.935113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.306 [2024-07-12 00:48:21.935139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.306 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.935891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.935920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.936908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.936936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.937914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.937993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.938972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.938997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.307 qpair failed and we were unable to recover it.
00:35:54.307 [2024-07-12 00:48:21.939622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.307 [2024-07-12 00:48:21.939650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.939736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.939763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.939847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.939873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.939950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.939977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.940966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.940993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.941940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.941966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.942899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.942925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.943001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.943026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.943100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.943125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.943209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.308 [2024-07-12 00:48:21.943239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.308 qpair failed and we were unable to recover it.
00:35:54.308 [2024-07-12 00:48:21.943320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.943436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.943557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.943671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.943786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 
00:35:54.308 [2024-07-12 00:48:21.943898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.943923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.944006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.944034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.308 qpair failed and we were unable to recover it. 00:35:54.308 [2024-07-12 00:48:21.944124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.308 [2024-07-12 00:48:21.944152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.944453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.944905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.944931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.945013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.945552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.945916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.945999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.946110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.946679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.946907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.946934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.947240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.947770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.947897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.947974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.948106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.948217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 
00:35:54.309 [2024-07-12 00:48:21.948320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.948422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.948526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.309 qpair failed and we were unable to recover it. 00:35:54.309 [2024-07-12 00:48:21.948641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.309 [2024-07-12 00:48:21.948670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.948751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.948779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.948860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.948885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.948958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.948984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.949388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.949821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.949928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.949954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.950456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.950905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.950931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.951008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.310 [2024-07-12 00:48:21.951549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.951910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 00:35:54.310 [2024-07-12 00:48:21.951991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.310 [2024-07-12 00:48:21.952021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.310 qpair failed and we were unable to recover it. 
00:35:54.313 [2024-07-12 00:48:21.964134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 
00:35:54.313 [2024-07-12 00:48:21.964654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.964969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.964995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 
00:35:54.313 [2024-07-12 00:48:21.965190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 
00:35:54.313 [2024-07-12 00:48:21.965751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.313 [2024-07-12 00:48:21.965777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.313 qpair failed and we were unable to recover it. 00:35:54.313 [2024-07-12 00:48:21.965859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.965884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.965961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.965988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.966276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.966820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.966928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.966955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.967378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.967836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.967938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.967964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.968511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.968930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.968958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.969035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.969148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.969255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.969366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.969469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 
00:35:54.314 [2024-07-12 00:48:21.969578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.314 qpair failed and we were unable to recover it. 00:35:54.314 [2024-07-12 00:48:21.969693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.314 [2024-07-12 00:48:21.969720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.969796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.969822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.969895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.969921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.970111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.970665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.970909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.970994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.971214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.971749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.971968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.971994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.972281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.972811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.972914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.972941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.973348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.973786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 
00:35:54.315 [2024-07-12 00:48:21.973897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.973925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.974006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.315 [2024-07-12 00:48:21.974031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.315 qpair failed and we were unable to recover it. 00:35:54.315 [2024-07-12 00:48:21.974120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.974438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.974879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.974907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.974996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.975547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.975973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.975999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.976076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.976607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.976941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.976967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.977149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.977680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.977907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.977932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 
00:35:54.316 [2024-07-12 00:48:21.978259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.316 qpair failed and we were unable to recover it. 00:35:54.316 [2024-07-12 00:48:21.978682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.316 [2024-07-12 00:48:21.978710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.978788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.978813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.978894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.978920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.978995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.979308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.979794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.979915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.979942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.980480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.980927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.980952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.981038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.981607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.981934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.981960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.982168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.982702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.982919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.982945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.983031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.983135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 
00:35:54.317 [2024-07-12 00:48:21.983244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.983351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.983458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.317 [2024-07-12 00:48:21.983560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.317 [2024-07-12 00:48:21.983591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.317 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.983673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.983699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.983775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.983800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.983885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.983910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.983992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.984325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.984859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.984966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.984992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.985411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.985848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.985961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.985988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.986532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.986958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.986984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.987064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 
00:35:54.318 [2024-07-12 00:48:21.987629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.318 [2024-07-12 00:48:21.987841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.318 [2024-07-12 00:48:21.987868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.318 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.987951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.987977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.988170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.988714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.988937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.988965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.989267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.989826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.989929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.989959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.990359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.990837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 
00:35:54.319 [2024-07-12 00:48:21.990954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.990982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.991067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.991094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.991182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.991208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.319 [2024-07-12 00:48:21.991286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.319 [2024-07-12 00:48:21.991314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.319 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.991391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 
00:35:54.320 [2024-07-12 00:48:21.991499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.991609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.991715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.991823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.991930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.991957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 
00:35:54.320 [2024-07-12 00:48:21.992038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 
00:35:54.320 [2024-07-12 00:48:21.992564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.992923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.992956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 00:35:54.320 [2024-07-12 00:48:21.993044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.320 [2024-07-12 00:48:21.993071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.320 qpair failed and we were unable to recover it. 
00:35:54.320 [2024-07-12 00:48:21.993148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.993928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.993956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.994901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.994927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.320 [2024-07-12 00:48:21.995695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.320 qpair failed and we were unable to recover it.
00:35:54.320 [2024-07-12 00:48:21.995778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.995805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.995891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.995918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.995997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.996896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.996976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.997885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.997912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.998915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.998942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:21.999902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:21.999928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:22.000002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:22.000029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:22.000106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:22.000132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.321 qpair failed and we were unable to recover it.
00:35:54.321 [2024-07-12 00:48:22.000209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.321 [2024-07-12 00:48:22.000235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.000899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.000986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.001901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.001979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.002904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.002987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.003902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.003991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.322 [2024-07-12 00:48:22.004542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.322 qpair failed and we were unable to recover it.
00:35:54.322 [2024-07-12 00:48:22.004622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.004656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.004739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.004766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.004845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.004871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.004952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.004978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.323 [2024-07-12 00:48:22.005741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.323 qpair failed and we were unable to recover it.
00:35:54.323 [2024-07-12 00:48:22.005815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.005842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.005922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.005950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 
00:35:54.323 [2024-07-12 00:48:22.006354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.006805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 
00:35:54.323 [2024-07-12 00:48:22.006910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.006937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 
00:35:54.323 [2024-07-12 00:48:22.007425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.007867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.007897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 
00:35:54.323 [2024-07-12 00:48:22.007975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.008000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.008082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.008109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.323 qpair failed and we were unable to recover it. 00:35:54.323 [2024-07-12 00:48:22.008194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.323 [2024-07-12 00:48:22.008220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.008532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.008960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.008986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.009068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.009611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.009931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.009958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.010163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.010724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.010953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.010980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.011280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.011813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.011920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.011954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 
00:35:54.324 [2024-07-12 00:48:22.012333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.324 [2024-07-12 00:48:22.012593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.324 qpair failed and we were unable to recover it. 00:35:54.324 [2024-07-12 00:48:22.012673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.012699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.012782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.012809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 
00:35:54.325 [2024-07-12 00:48:22.012885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.012911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.012986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 
00:35:54.325 [2024-07-12 00:48:22.013441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.013914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.013939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 
00:35:54.325 [2024-07-12 00:48:22.014024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.014051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.014126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.014154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.014241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.014267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.014342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.014369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 00:35:54.325 [2024-07-12 00:48:22.014460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.325 [2024-07-12 00:48:22.014487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.325 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.026081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.026662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.026915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.026994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.027203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.027780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.027916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.027942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.028392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.028809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.028914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.028940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.029475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 00:35:54.328 [2024-07-12 00:48:22.029919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.328 [2024-07-12 00:48:22.029949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.328 qpair failed and we were unable to recover it. 
00:35:54.328 [2024-07-12 00:48:22.030029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.030561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.030910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.030996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.031103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.031661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.031915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.031997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.032209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.032820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.032925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.032951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.033372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.033817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.033926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.033952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.034034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.034061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.034154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.034182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.034257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.034284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.034375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.034402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 
00:35:54.329 [2024-07-12 00:48:22.034488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.329 [2024-07-12 00:48:22.034516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.329 qpair failed and we were unable to recover it. 00:35:54.329 [2024-07-12 00:48:22.034596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.034623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.034703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.034729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.034807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.034833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.034922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.034950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.035046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.035635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.035964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.035991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.036203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.036740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.036953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.036980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.037303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.037857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.037966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.037995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.038406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.038851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 
00:35:54.330 [2024-07-12 00:48:22.038956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.330 [2024-07-12 00:48:22.038983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.330 qpair failed and we were unable to recover it. 00:35:54.330 [2024-07-12 00:48:22.039057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.039495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.039909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.039935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.040014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.040562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.040913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.040999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.041108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.041665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.041905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.041992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.042211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 
00:35:54.331 [2024-07-12 00:48:22.042746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.331 [2024-07-12 00:48:22.042881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.331 qpair failed and we were unable to recover it. 00:35:54.331 [2024-07-12 00:48:22.042957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.042987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.043293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.043850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.043958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.043984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.044401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.044863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.044970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.044996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.045522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.045956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.045984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.046058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.046083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.046173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.046200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.046281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.046307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.046388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.046417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 00:35:54.332 [2024-07-12 00:48:22.046522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.332 [2024-07-12 00:48:22.046549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.332 qpair failed and we were unable to recover it. 
00:35:54.332 [2024-07-12 00:48:22.046642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.046670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.046753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.046779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.046856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.046882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.046969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.046996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.047091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.047118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.047195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.047222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.047306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.332 [2024-07-12 00:48:22.047334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.332 qpair failed and we were unable to recover it.
00:35:54.332 [2024-07-12 00:48:22.047417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.047444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.047525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.047552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.047644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.047672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.047799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.047826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.047951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.047977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.048933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.048958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.049949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.049975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.050955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.050981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.051923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.051950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.052030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.333 [2024-07-12 00:48:22.052056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.333 qpair failed and we were unable to recover it.
00:35:54.333 [2024-07-12 00:48:22.052153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.052918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.052945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.053901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.053979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.054897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.054923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.055962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.055989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.334 qpair failed and we were unable to recover it.
00:35:54.334 [2024-07-12 00:48:22.056618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.334 [2024-07-12 00:48:22.056645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.056730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.056756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.056851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.056880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.056976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.057003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.057103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.057134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.057220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.057246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.057325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.335 [2024-07-12 00:48:22.057350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.335 qpair failed and we were unable to recover it.
00:35:54.335 [2024-07-12 00:48:22.057432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.057457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.057534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.057560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.057652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.057681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.057809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.057837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.057914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.057940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.058024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.058581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.058921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.058999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.059111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.059679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.059904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.059930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.060223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 
00:35:54.335 [2024-07-12 00:48:22.060761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.060899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.060978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.061004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.061091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.335 [2024-07-12 00:48:22.061124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.335 qpair failed and we were unable to recover it. 00:35:54.335 [2024-07-12 00:48:22.061219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.061337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.061453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.061568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.061695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.061802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.061906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.061930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.062433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.062945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.062971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.063054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.063620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.063970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.063996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.064207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 
00:35:54.336 [2024-07-12 00:48:22.064792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.064909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.064938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.065023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.065049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.065131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.065157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.336 qpair failed and we were unable to recover it. 00:35:54.336 [2024-07-12 00:48:22.065235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.336 [2024-07-12 00:48:22.065262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 
00:35:54.337 [2024-07-12 00:48:22.065341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.065450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.065559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.065682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.065794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 
00:35:54.337 [2024-07-12 00:48:22.065902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.065930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 
00:35:54.337 [2024-07-12 00:48:22.066452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.066889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.066916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 
00:35:54.337 [2024-07-12 00:48:22.066994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.067021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.067106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.067132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.067218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.067245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.067329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.067355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 00:35:54.337 [2024-07-12 00:48:22.067434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.337 [2024-07-12 00:48:22.067459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.337 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.079785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.079812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.079888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.079914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.079989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.080316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.080845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.080954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.080981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.081402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.081828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.081940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.081964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.082052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.082163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.082263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.082375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 
00:35:54.340 [2024-07-12 00:48:22.082485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.340 [2024-07-12 00:48:22.082600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.340 [2024-07-12 00:48:22.082626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.340 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.082706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.082731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.082816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.082842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.082926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.082951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.083031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.083583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.083901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.083925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.084106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.084644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.084963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.084989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.085183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.085751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.085963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.085991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.086311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-07-12 00:48:22.086866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.086971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.086997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-07-12 00:48:22.087090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-07-12 00:48:22.087117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.342 [2024-07-12 00:48:22.087208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-07-12 00:48:22.087240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-07-12 00:48:22.087331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-07-12 00:48:22.087359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-07-12 00:48:22.087443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-07-12 00:48:22.087471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.087553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.087579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.087690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.087719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.087810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.087845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.087930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.087959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 
00:35:54.619 [2024-07-12 00:48:22.088052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.088080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.088202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.088233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.088318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.088345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.088421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.088448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 00:35:54.619 [2024-07-12 00:48:22.088527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.619 [2024-07-12 00:48:22.088553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.619 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-07-12 00:48:22.101414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.101517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.101624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.101725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.101837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-07-12 00:48:22.101942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.101970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-07-12 00:48:22.102484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.102888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.102974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-07-12 00:48:22.103091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.103205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.103311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.103472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.103608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 
00:35:54.622 [2024-07-12 00:48:22.103719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.622 [2024-07-12 00:48:22.103745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.622 qpair failed and we were unable to recover it. 00:35:54.622 [2024-07-12 00:48:22.103821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.103846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.103938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.103969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.104273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.104814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.104919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.104946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.105349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.105888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.105914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.105998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.106436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.106928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.106954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 
00:35:54.623 [2024-07-12 00:48:22.107035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.107062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.107144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.107170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.107254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.107281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.107364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.107392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.623 qpair failed and we were unable to recover it. 00:35:54.623 [2024-07-12 00:48:22.107478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.623 [2024-07-12 00:48:22.107506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-07-12 00:48:22.107604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.107638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.107727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.107754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.107848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.107875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.107954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.107981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-07-12 00:48:22.108174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-07-12 00:48:22.108746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.108965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.108991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-07-12 00:48:22.109299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 00:35:54.624 [2024-07-12 00:48:22.109778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.624 [2024-07-12 00:48:22.109804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.624 qpair failed and we were unable to recover it. 
00:35:54.624 [2024-07-12 00:48:22.109880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.624 [2024-07-12 00:48:22.109907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.624 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats ~115 times between 00:48:22.109880 and 00:48:22.122847, for tqpair values 0x7f6aa0000b90, 0x7f6aa8000b90, 0x7f6ab0000b90, and 0x863990, all targeting addr=10.0.0.2, port=4420 ...]
00:35:54.627 [2024-07-12 00:48:22.122958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.122986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 
00:35:54.627 [2024-07-12 00:48:22.123574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.123934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.123961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 
00:35:54.627 [2024-07-12 00:48:22.124164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 
00:35:54.627 [2024-07-12 00:48:22.124792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.124911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.124950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 
00:35:54.627 [2024-07-12 00:48:22.125373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.125928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.125956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 
00:35:54.627 [2024-07-12 00:48:22.126039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.126066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.627 qpair failed and we were unable to recover it. 00:35:54.627 [2024-07-12 00:48:22.126144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.627 [2024-07-12 00:48:22.126170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.126303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.126435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.126546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.126676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.126796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.126904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.126930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.127232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.127791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.127920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.127995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.128327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.128781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.128895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.128922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.129483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.129933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.129963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 
00:35:54.628 [2024-07-12 00:48:22.130059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.130088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.130175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.628 [2024-07-12 00:48:22.130202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.628 qpair failed and we were unable to recover it. 00:35:54.628 [2024-07-12 00:48:22.130286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.130395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.130506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-07-12 00:48:22.130631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.130733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.130846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.130949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.130975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-07-12 00:48:22.131178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-07-12 00:48:22.131778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.131915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.131991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-07-12 00:48:22.132301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 00:35:54.629 [2024-07-12 00:48:22.132772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.629 [2024-07-12 00:48:22.132798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.629 qpair failed and we were unable to recover it. 
00:35:54.629 [2024-07-12 00:48:22.132880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.132907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.132985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.133895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.133986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.629 [2024-07-12 00:48:22.134778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.629 qpair failed and we were unable to recover it.
00:35:54.629 [2024-07-12 00:48:22.134857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.134881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.134963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.134990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.135901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.135978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.136902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.136977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.137952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.137978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.138833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.138862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.139005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.139031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.139108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.139134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.630 qpair failed and we were unable to recover it.
00:35:54.630 [2024-07-12 00:48:22.139218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.630 [2024-07-12 00:48:22.139244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.139882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.139907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.140932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.140960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.141953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.141982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.142965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.142992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.143068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.143094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.143174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.143204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.143289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.143315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.143393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.631 [2024-07-12 00:48:22.143419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.631 qpair failed and we were unable to recover it.
00:35:54.631 [2024-07-12 00:48:22.143508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-07-12 00:48:22.143535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-07-12 00:48:22.143613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.631 [2024-07-12 00:48:22.143640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.631 qpair failed and we were unable to recover it. 00:35:54.631 [2024-07-12 00:48:22.143719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.143746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.143827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.143853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.143939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.143965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.144053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.144638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.144969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.144994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.145187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.145735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.145936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.145965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.146306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.146855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.146961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.146986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.147062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.147087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.147172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.147201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.147282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.147309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 
00:35:54.632 [2024-07-12 00:48:22.147410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.147450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.147543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.632 [2024-07-12 00:48:22.147572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.632 qpair failed and we were unable to recover it. 00:35:54.632 [2024-07-12 00:48:22.147666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.147699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.147783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.147810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.147893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.147919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.148002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.148548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.148896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.148922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.149108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.149659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.149909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.149995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.150230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.150803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.150915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.150942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.151375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.151819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 
00:35:54.633 [2024-07-12 00:48:22.151928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.151955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.152056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.633 [2024-07-12 00:48:22.152082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.633 qpair failed and we were unable to recover it. 00:35:54.633 [2024-07-12 00:48:22.152161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-07-12 00:48:22.152506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 00:35:54.634 [2024-07-12 00:48:22.152951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.634 [2024-07-12 00:48:22.152976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.634 qpair failed and we were unable to recover it. 
00:35:54.634 [2024-07-12 00:48:22.153057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.634 [2024-07-12 00:48:22.153083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.634 qpair failed and we were unable to recover it.
00:35:54.634 [2024-07-12 00:48:22.153161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.634 [2024-07-12 00:48:22.153186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.634 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats continuously from 00:48:22.153264 through 00:48:22.165861, cycling over tqpair=0x7f6aa8000b90, 0x7f6aa0000b90, 0x7f6ab0000b90, and 0x863990, all with addr=10.0.0.2, port=4420 ...]
00:35:54.637 [2024-07-12 00:48:22.165939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.165966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.166548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.166907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.166933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.167137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.167711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.167943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.167970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.168266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.168821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.168927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.168953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.169035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.169141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.169249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 
00:35:54.637 [2024-07-12 00:48:22.169350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.169450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.637 [2024-07-12 00:48:22.169549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.637 [2024-07-12 00:48:22.169574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.637 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.169662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.169690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.169771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.169797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.169885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.169914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.170446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.170947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.170974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.171072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.171652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.171894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.171921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.172259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.172810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.172922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.172949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.173351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.173804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 
00:35:54.638 [2024-07-12 00:48:22.173909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.173937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.174025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.174051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.174143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.638 [2024-07-12 00:48:22.174183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.638 qpair failed and we were unable to recover it. 00:35:54.638 [2024-07-12 00:48:22.174267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.639 [2024-07-12 00:48:22.174295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.639 qpair failed and we were unable to recover it. 00:35:54.639 [2024-07-12 00:48:22.174371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.639 [2024-07-12 00:48:22.174397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.639 qpair failed and we were unable to recover it. 
00:35:54.639 [2024-07-12 00:48:22.174492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.174518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.174595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.174622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.174700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.174726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.174813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.174841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.174925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.174951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.175940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.175969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.176889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.176915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.177893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.177979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.639 [2024-07-12 00:48:22.178760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.639 [2024-07-12 00:48:22.178789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.639 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.178883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.178910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.178988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.179900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.179991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.180900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.180930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.181947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.181975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.182925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.182951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.183039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.183071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.183161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.183189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.183272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.183297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.183376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.640 [2024-07-12 00:48:22.183401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.640 qpair failed and we were unable to recover it.
00:35:54.640 [2024-07-12 00:48:22.183528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.183553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.183690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.183719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.183805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.183832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.183922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.183949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.184897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.184923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.185914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.185998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.186881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.186923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.187005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.187032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.187114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.187142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.187231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.187259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.187346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.641 [2024-07-12 00:48:22.187372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.641 qpair failed and we were unable to recover it.
00:35:54.641 [2024-07-12 00:48:22.187455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.642 [2024-07-12 00:48:22.187481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.642 qpair failed and we were unable to recover it.
00:35:54.642 [2024-07-12 00:48:22.187564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.187601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.187697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.187725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.187814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.187840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.187942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.187969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.188155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.188744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.188901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.188987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.189311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.189877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.189904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.189991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.190417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.190937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.190965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.191044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 
00:35:54.642 [2024-07-12 00:48:22.191608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.191953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.642 [2024-07-12 00:48:22.191981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.642 qpair failed and we were unable to recover it. 00:35:54.642 [2024-07-12 00:48:22.192067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.192183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.192306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.192433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.192537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.192657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.192787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.192904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.192932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.193407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.193853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.193961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.193986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.194527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.194972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.194998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.195084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.195653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.195901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.195977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.196090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 
00:35:54.643 [2024-07-12 00:48:22.196203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.196309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.196425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.196529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.643 [2024-07-12 00:48:22.196554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.643 qpair failed and we were unable to recover it. 00:35:54.643 [2024-07-12 00:48:22.196647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.644 [2024-07-12 00:48:22.196676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.644 qpair failed and we were unable to recover it. 
00:35:54.644 [2024-07-12 00:48:22.197892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.644 [2024-07-12 00:48:22.197921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.644 qpair failed and we were unable to recover it.
00:35:54.644 [2024-07-12 00:48:22.198022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.644 [2024-07-12 00:48:22.198049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.644 qpair failed and we were unable to recover it.
00:35:54.644 [2024-07-12 00:48:22.198140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.644 [2024-07-12 00:48:22.198168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.644 qpair failed and we were unable to recover it.
00:35:54.644 [2024-07-12 00:48:22.198247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.644 [2024-07-12 00:48:22.198273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.644 qpair failed and we were unable to recover it.
00:35:54.644 [2024-07-12 00:48:22.198350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.644 [2024-07-12 00:48:22.198376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.644 qpair failed and we were unable to recover it.
00:35:54.646 [2024-07-12 00:48:22.209150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.646 qpair failed and we were unable to recover it. 00:35:54.646 [2024-07-12 00:48:22.209263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.646 qpair failed and we were unable to recover it. 00:35:54.646 [2024-07-12 00:48:22.209373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.646 qpair failed and we were unable to recover it. 00:35:54.646 [2024-07-12 00:48:22.209483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.646 qpair failed and we were unable to recover it. 00:35:54.646 [2024-07-12 00:48:22.209594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.646 qpair failed and we were unable to recover it. 
00:35:54.646 [2024-07-12 00:48:22.209706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.646 [2024-07-12 00:48:22.209737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.209820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.209850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.209926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.209957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.210253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.210816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.210924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.210950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.211378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.211826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.211936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.211966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.212525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.212907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.212998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.213113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.213690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.213910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.213936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.214020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.214047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.214122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.214148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 
00:35:54.647 [2024-07-12 00:48:22.214223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.214249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.647 [2024-07-12 00:48:22.214338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.647 [2024-07-12 00:48:22.214365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.647 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.214450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.214478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.214567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.214604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.214706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.214732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.214806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.214833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.214920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.214946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.215377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.215835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.215939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.215965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.216507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.216928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.216954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.217033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.648 [2024-07-12 00:48:22.217567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.217933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.217960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 00:35:54.648 [2024-07-12 00:48:22.218041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.648 [2024-07-12 00:48:22.218068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.648 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-07-12 00:48:22.230498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.230525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.230613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.230642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.230731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.230758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.230841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.230867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.230955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.230982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-07-12 00:48:22.231069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 
00:35:54.651 [2024-07-12 00:48:22.231691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.231909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.231934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.651 qpair failed and we were unable to recover it. 00:35:54.651 [2024-07-12 00:48:22.232026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.651 [2024-07-12 00:48:22.232054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.232255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.232839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.232946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.232975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.233380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.233797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.233918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.233952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.234514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.234949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.234975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.235055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.235584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.235912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.235938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.236016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.236042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 
00:35:54.652 [2024-07-12 00:48:22.236119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.236147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.236238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.236263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.236341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.236367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.236441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.652 [2024-07-12 00:48:22.236467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.652 qpair failed and we were unable to recover it. 00:35:54.652 [2024-07-12 00:48:22.236550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.236576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-07-12 00:48:22.236674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.236703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.236786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.236816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.236892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.236919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-07-12 00:48:22.237220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-07-12 00:48:22.237791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.237917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.237995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-07-12 00:48:22.238309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 00:35:54.653 [2024-07-12 00:48:22.238751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.653 [2024-07-12 00:48:22.238777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.653 qpair failed and we were unable to recover it. 
00:35:54.653 [2024-07-12 00:48:22.238858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.238885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.238973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.238999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.239902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.239989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.240901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.653 [2024-07-12 00:48:22.240979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.653 [2024-07-12 00:48:22.241003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.653 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.241960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.241989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.264737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.264778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.264930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.264970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.265575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.265627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.265776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.265816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.265960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.265999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.266856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.266980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.267818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.267844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.271604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.271656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.271807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.271835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.271937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.271964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.272089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.272115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.272232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.272258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.272353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.272379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.272479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.272506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.654 qpair failed and we were unable to recover it.
00:35:54.654 [2024-07-12 00:48:22.272603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.654 [2024-07-12 00:48:22.272630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.272724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.272749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.272892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.272918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.273041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.273068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.283603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.283639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.283810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.283837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.283996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.284931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.284958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.285899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.285925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.286073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.286299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.286479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.286619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.286827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.286969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.287223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.287364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.287515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.287743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.287903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.287928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.288966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.288991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.289116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.289140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.289251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.289277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.289417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.289443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.655 [2024-07-12 00:48:22.289548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.655 [2024-07-12 00:48:22.289573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.655 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.289707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.289732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.289860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.289885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.290897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.290922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.291936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.291960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.292083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.292111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.292236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.292261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.292361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.292386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.292475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.656 [2024-07-12 00:48:22.292500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:54.656 qpair failed and we were unable to recover it.
00:35:54.656 [2024-07-12 00:48:22.292599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.292631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.292736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.292761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.292856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.292881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 
00:35:54.656 [2024-07-12 00:48:22.293253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.293832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.293857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 
00:35:54.656 [2024-07-12 00:48:22.293980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.294132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.294258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.294412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.294575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 
00:35:54.656 [2024-07-12 00:48:22.294761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.294901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.294928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.295023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.295055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.295169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.295195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.295316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.295342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 
00:35:54.656 [2024-07-12 00:48:22.295491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.656 [2024-07-12 00:48:22.295515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.656 qpair failed and we were unable to recover it. 00:35:54.656 [2024-07-12 00:48:22.295644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.295670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.295766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.295791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.295905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.295929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.296052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 
00:35:54.657 [2024-07-12 00:48:22.296227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.296357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.296480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.296602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.296724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 
00:35:54.657 [2024-07-12 00:48:22.296857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.296888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 
00:35:54.657 [2024-07-12 00:48:22.297512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.297929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.297953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 00:35:54.657 [2024-07-12 00:48:22.298073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.298098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:54.657 qpair failed and we were unable to recover it. 
00:35:54.657 [2024-07-12 00:48:22.298190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.657 [2024-07-12 00:48:22.298217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.799219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.799280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.799397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.799424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.799553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.799579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.799761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.799801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 
00:35:55.261 [2024-07-12 00:48:22.799965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.800005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.800162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.800189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.261 [2024-07-12 00:48:22.800315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.261 [2024-07-12 00:48:22.800341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.261 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.800425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.800450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.800551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.800577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-07-12 00:48:22.800680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.800706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.800826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.800853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.800974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-07-12 00:48:22.801375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.801852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.801996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-07-12 00:48:22.802119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.802275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.802423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.802561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.802742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-07-12 00:48:22.802883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.802913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 
00:35:55.262 [2024-07-12 00:48:22.803617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.803893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.803921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.262 [2024-07-12 00:48:22.804016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.262 [2024-07-12 00:48:22.804045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.262 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.804187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-07-12 00:48:22.804309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.804421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.804583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.804711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.804893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.804919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-07-12 00:48:22.805013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.805040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.805156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.805183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.805331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.805357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.805476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.805503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 00:35:55.263 [2024-07-12 00:48:22.805614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.263 [2024-07-12 00:48:22.805664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.263 qpair failed and we were unable to recover it. 
00:35:55.263 [2024-07-12 00:48:22.805802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.805831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.805952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.805978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.806892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.806919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.807877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.807909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.808052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.808093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.808214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.808247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.263 qpair failed and we were unable to recover it.
00:35:55.263 [2024-07-12 00:48:22.811599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.263 [2024-07-12 00:48:22.811638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.811748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.811774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.811895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.811922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.812875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.812916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.813873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.813899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.814841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.814870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.815865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.815979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.816006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.264 qpair failed and we were unable to recover it.
00:35:55.264 [2024-07-12 00:48:22.816123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.264 [2024-07-12 00:48:22.816151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.816257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.816284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.816385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.816428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.816555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.816583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.816707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.816745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.816920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.816956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.817132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.817299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.817529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.817659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.817846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.817953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.818948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.818977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.819972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.819998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.265 [2024-07-12 00:48:22.820842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.265 qpair failed and we were unable to recover it.
00:35:55.265 [2024-07-12 00:48:22.820993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.821871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.821898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.822022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.822049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.822167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.822195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.822329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.822357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.822436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.266 [2024-07-12 00:48:22.822463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.266 qpair failed and we were unable to recover it.
00:35:55.266 [2024-07-12 00:48:22.822540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.822566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.822659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.822689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.822800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.822826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.822908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.822936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-07-12 00:48:22.823122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-07-12 00:48:22.823764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.823917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.823998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.824024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.824116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.824142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.266 [2024-07-12 00:48:22.824233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.824260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 
00:35:55.266 [2024-07-12 00:48:22.824352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.266 [2024-07-12 00:48:22.824379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.266 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.824494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.824520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.824602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.824629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.824723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.824751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.824836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.824862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-07-12 00:48:22.824942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.824968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-07-12 00:48:22.825549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.825895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.825921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 00:35:55.267 [2024-07-12 00:48:22.826001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.826032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.267 qpair failed and we were unable to recover it. 
00:35:55.267 [2024-07-12 00:48:22.826109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.267 [2024-07-12 00:48:22.826135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.826677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.826956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.826983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.827278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.827839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.827958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.827984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.828600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.828864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.828891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.829223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 00:35:55.268 [2024-07-12 00:48:22.829768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.268 qpair failed and we were unable to recover it. 
00:35:55.268 [2024-07-12 00:48:22.829879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.268 [2024-07-12 00:48:22.829907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-07-12 00:48:22.830504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.830875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.830901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-07-12 00:48:22.831163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-07-12 00:48:22.831824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.831932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.831958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-07-12 00:48:22.832531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.832917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.832943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 00:35:55.269 [2024-07-12 00:48:22.833021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.833048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
00:35:55.269 [2024-07-12 00:48:22.833146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.269 [2024-07-12 00:48:22.833173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.269 qpair failed and we were unable to recover it. 
[... the same three-line error sequence repeats continuously from 00:48:22.833262 through 00:48:22.846418: posix.c:1037:posix_sock_create reports connect() failed with errno = 111, nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports a sock connection error (tqpair alternating between 0x7f6aa0000b90 and 0x7f6aa8000b90, always addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:55.273 [2024-07-12 00:48:22.846492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.846518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.846620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.846655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.846742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.846768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.846842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.846868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.846943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.846969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.847044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.847569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.847926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.847952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.848127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.848682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.848908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.848934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.849259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.849817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.849923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.849949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.850331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.850869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.850896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.850978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 
00:35:55.273 [2024-07-12 00:48:22.851406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.273 qpair failed and we were unable to recover it. 00:35:55.273 [2024-07-12 00:48:22.851839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.273 [2024-07-12 00:48:22.851866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.851946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.851972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.852481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.852924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.852950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.853037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.853606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.853921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.853947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.854133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.854700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.854915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.854941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.855253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.855790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.855918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.855994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.856310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.856865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.856892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.856979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.857402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.857823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 
00:35:55.274 [2024-07-12 00:48:22.857934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.857961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.274 [2024-07-12 00:48:22.858045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.274 [2024-07-12 00:48:22.858071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.274 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.858489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.858936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.858965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.859048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.859610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.859938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.859964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.860169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.860716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.860937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.860962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.861260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.861825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.861939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.861967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.862368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.862795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.862909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.862936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.863441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.863895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.863922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.863998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.864116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.864227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.864339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.864449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 
00:35:55.275 [2024-07-12 00:48:22.864562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.275 [2024-07-12 00:48:22.864598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.275 qpair failed and we were unable to recover it. 00:35:55.275 [2024-07-12 00:48:22.864684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.864711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.864793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.864820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.864899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.864926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.865005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.276 [2024-07-12 00:48:22.865116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.865218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.865328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.865432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 00:35:55.276 [2024-07-12 00:48:22.865539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.276 [2024-07-12 00:48:22.865569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.276 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.877702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.877728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.877811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.877838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.877914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.877940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.878239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.878793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.878899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.878926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.879329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.879891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.879918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.879993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.880423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.880865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.880892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.880982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.881092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.881194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.881299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.881407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 
00:35:55.279 [2024-07-12 00:48:22.881513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.279 qpair failed and we were unable to recover it. 00:35:55.279 [2024-07-12 00:48:22.881626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.279 [2024-07-12 00:48:22.881653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.881732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.881758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.881840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.881866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.881942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.881968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.882053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.882595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.882937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.882963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.883146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.883693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.883897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.883923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.884216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.884775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.884906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.884984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.885302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.885836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.885938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.885964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.886368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.886796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.886907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.886934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.887457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.887922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.887948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.888031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.888577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.888929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.888956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 00:35:55.280 [2024-07-12 00:48:22.889032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.280 [2024-07-12 00:48:22.889058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.280 qpair failed and we were unable to recover it. 
00:35:55.280 [2024-07-12 00:48:22.889141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.889693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.889959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.889986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.890284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.890879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.890907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.890989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.891427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.891887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.891915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.891999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.892552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.892917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.892998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.893114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.893701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.893928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.893954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.894239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.894808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.894962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.894988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.895441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.895948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.895974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.896054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.896606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.896950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.896976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.897059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.897165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.897266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.897377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.897481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 00:35:55.281 [2024-07-12 00:48:22.897597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.281 [2024-07-12 00:48:22.897624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.281 qpair failed and we were unable to recover it. 
00:35:55.281 [2024-07-12 00:48:22.897710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.281 [2024-07-12 00:48:22.897739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.281 qpair failed and we were unable to recover it.
00:35:55.281 [2024-07-12 00:48:22.897815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.281 [2024-07-12 00:48:22.897841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.281 qpair failed and we were unable to recover it.
00:35:55.281 [2024-07-12 00:48:22.897923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.281 [2024-07-12 00:48:22.897949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.281 qpair failed and we were unable to recover it.
00:35:55.281 [2024-07-12 00:48:22.898024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x871320 (9): Bad file descriptor
00:35:55.282 [2024-07-12 00:48:22.898475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.898968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.898994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.899960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.899986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.900957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.900987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.901973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.901999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.902932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.902958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.903930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.903956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.904947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.904974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.282 [2024-07-12 00:48:22.905819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.282 [2024-07-12 00:48:22.905845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.282 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.905945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.905971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.906904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.906931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.907914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.907988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.908014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.908101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.908128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.908210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.283 [2024-07-12 00:48:22.908237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.283 qpair failed and we were unable to recover it.
00:35:55.283 [2024-07-12 00:48:22.908314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.908417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.908525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.908651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.908777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.908890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.908917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.908999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.909445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.909886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.909914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.909990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.910553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.910915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.910999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.911114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.911690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.911917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.911943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.912265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.912856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.912963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.912989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.913413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.913904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.913933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.914014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.914134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.914245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.914365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.914471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 
00:35:55.283 [2024-07-12 00:48:22.914617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.283 [2024-07-12 00:48:22.914644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.283 qpair failed and we were unable to recover it. 00:35:55.283 [2024-07-12 00:48:22.914733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.914760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.914873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.914899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.914979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.915230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.915791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.915898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.915924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.916385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.916826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.916968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.916994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.917598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.917930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.917957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 00:35:55.284 [2024-07-12 00:48:22.918043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.284 [2024-07-12 00:48:22.918068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.284 qpair failed and we were unable to recover it. 
00:35:55.284 [2024-07-12 00:48:22.918267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.284 [2024-07-12 00:48:22.918292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.284 qpair failed and we were unable to recover it.
00:35:55.285 (the three entries above repeat from [2024-07-12 00:48:22.918374] through [2024-07-12 00:48:22.931737] for tqpair values 0x863990, 0x7f6aa8000b90, 0x7f6aa0000b90, and 0x7f6ab0000b90; duplicate entries omitted)
00:35:55.285 [2024-07-12 00:48:22.931813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.931840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-07-12 00:48:22.931930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.931961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-07-12 00:48:22.932047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.932074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-07-12 00:48:22.932156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.932184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-07-12 00:48:22.932268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.932298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 
00:35:55.285 [2024-07-12 00:48:22.932387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.285 [2024-07-12 00:48:22.932415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.285 qpair failed and we were unable to recover it. 00:35:55.285 [2024-07-12 00:48:22.932497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.932524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.932605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.932632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.932709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.932736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.932814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.932840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.932920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.932946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.933497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.933959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.933987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.934070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.934688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.934925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.934951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.935251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.935821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.935950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.935980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.936602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.936933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.936960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.937140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.937717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.937944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.937974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.938273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.938838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.938943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.938969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.939379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.939827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.939930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.939956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.940041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.940067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.940141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.940168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.940252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.940279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 00:35:55.286 [2024-07-12 00:48:22.940368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.286 [2024-07-12 00:48:22.940394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.286 qpair failed and we were unable to recover it. 
00:35:55.286 [2024-07-12 00:48:22.940467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.286 [2024-07-12 00:48:22.940494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.286 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it" triplet repeats ~115 times between 00:48:22.940467 and 00:48:22.953453: errno = 111 (ECONNREFUSED) against addr=10.0.0.2, port=4420, alternating between tqpair=0x7f6aa8000b90 and tqpair=0x7f6aa0000b90; repeated entries elided ...]
00:35:55.288 [2024-07-12 00:48:22.953428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.288 [2024-07-12 00:48:22.953453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.288 qpair failed and we were unable to recover it.
00:35:55.288 [2024-07-12 00:48:22.953532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.953558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.953643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.953670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.953749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.953775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.953854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.953881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.953966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.953993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.954076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.954656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.954910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.954994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.955198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.955779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.955919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.955995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.956372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.956835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.956949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.956980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.957502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.957943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.957970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 
00:35:55.288 [2024-07-12 00:48:22.958055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.958083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.958159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.958186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.958261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.958288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.958370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.288 [2024-07-12 00:48:22.958396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.288 qpair failed and we were unable to recover it. 00:35:55.288 [2024-07-12 00:48:22.958481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.958507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.958582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.958613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.958706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.958733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.958812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.958839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.958917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.958943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.959135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.959681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.959900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.959926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.960207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.960767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.960911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.960994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.961327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.961792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.961897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.961924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.962432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 00:35:55.289 [2024-07-12 00:48:22.962907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.289 [2024-07-12 00:48:22.962933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.289 qpair failed and we were unable to recover it. 
00:35:55.289 [2024-07-12 00:48:22.963008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.963923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.963949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.964922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.964948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.965927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.965953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.966033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.966059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.966135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.966161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.966352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.966379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.289 qpair failed and we were unable to recover it.
00:35:55.289 [2024-07-12 00:48:22.966453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.289 [2024-07-12 00:48:22.966479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.966680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.966712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.966797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.966823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.967909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.967989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.968920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.968947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.969967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.969996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.970918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.970944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.971941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.971967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.972950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.972976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.973948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.973974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.974055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.290 [2024-07-12 00:48:22.974082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.290 qpair failed and we were unable to recover it.
00:35:55.290 [2024-07-12 00:48:22.974169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.974279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.974389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.974517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.974681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 
00:35:55.290 [2024-07-12 00:48:22.974832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.974863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.974980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 
00:35:55.290 [2024-07-12 00:48:22.975535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.290 [2024-07-12 00:48:22.975831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.290 qpair failed and we were unable to recover it. 00:35:55.290 [2024-07-12 00:48:22.975914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.975941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.976142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.976692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.976906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.976932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.977453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.977934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.977960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.978054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.978613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.978951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.978978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.979163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.979721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.979937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.979964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.980247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.980893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.980920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.980992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.981423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.981961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.981987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.982064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.982609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.982928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.982958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.983152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 
00:35:55.291 [2024-07-12 00:48:22.983714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.983932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.983959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.291 [2024-07-12 00:48:22.984051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.291 [2024-07-12 00:48:22.984082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.291 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.984276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.984854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.984957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.984983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.985409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.985876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.985906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.985994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.986539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.986894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.986920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.987119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.987691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.987914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.987940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.988804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.988914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.988940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.989345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.989891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.989917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.990109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.990667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.990888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.990914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.991321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.991962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.991991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.992070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.292 [2024-07-12 00:48:22.992611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.992941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.992969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 00:35:55.292 [2024-07-12 00:48:22.993062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.292 [2024-07-12 00:48:22.993088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.292 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-07-12 00:48:22.993168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-07-12 00:48:22.993744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.993971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.993997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-07-12 00:48:22.994335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.994791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-07-12 00:48:22.994909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.994937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.995012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.995038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.995126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.995153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.995227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.995253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 00:35:55.293 [2024-07-12 00:48:22.995340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.293 [2024-07-12 00:48:22.995366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.293 qpair failed and we were unable to recover it. 
00:35:55.293 [2024-07-12 00:48:22.995476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.293 [2024-07-12 00:48:22.995505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.293 qpair failed and we were unable to recover it.
00:35:55.295 (previous connect()/qpair-failure message pair repeated through [2024-07-12 00:48:23.008744] for tqpair=0x7f6aa0000b90, 0x7f6aa8000b90, 0x7f6ab0000b90, and 0x863990, all with addr=10.0.0.2, port=4420; duplicate entries elided)
00:35:55.295 [2024-07-12 00:48:23.008821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.008847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.008933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.008961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.009380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.009809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.009920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.009947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.010475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.010863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.010890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.011185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.011764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.011901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.011977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.012309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.012858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.012967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.012994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.013509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.013938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.013964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.014150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.014679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.014919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.014999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.015217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.015331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.015441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.015541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.015665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 00:35:55.295 [2024-07-12 00:48:23.015770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.295 [2024-07-12 00:48:23.015797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.295 qpair failed and we were unable to recover it. 
00:35:55.295 [2024-07-12 00:48:23.015986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 
00:35:55.296 [2024-07-12 00:48:23.016509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.016948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.016974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 
00:35:55.296 [2024-07-12 00:48:23.017051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.017078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.017158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.017184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.017269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.017295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.017377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.017404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 00:35:55.296 [2024-07-12 00:48:23.017488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.296 [2024-07-12 00:48:23.017515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.296 qpair failed and we were unable to recover it. 
00:35:55.296 [2024-07-12 00:48:23.017610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.017640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.017729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.017756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.017843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.017870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.017954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.017980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.018950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.018977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.019934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.019960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.020910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.020995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.021916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.296 [2024-07-12 00:48:23.021941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.296 qpair failed and we were unable to recover it.
00:35:55.296 [2024-07-12 00:48:23.022020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.022901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.022927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.023905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.023930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.024807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.024834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.025900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.025975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.026967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.026994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.027948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.027978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.297 [2024-07-12 00:48:23.028627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.297 qpair failed and we were unable to recover it.
00:35:55.297 [2024-07-12 00:48:23.028712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.028739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.028817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.028843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.028923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.028949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.029907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.029935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.298 [2024-07-12 00:48:23.030690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.298 qpair failed and we were unable to recover it.
00:35:55.298 [2024-07-12 00:48:23.030780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.030807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.030885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.030912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.030996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.031323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.031890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.031916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.031994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.032434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.032878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.032906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.032990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.033529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.033855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.033881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 00:35:55.298 [2024-07-12 00:48:23.034074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.034100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.298 qpair failed and we were unable to recover it. 
00:35:55.298 [2024-07-12 00:48:23.034182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.298 [2024-07-12 00:48:23.034208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.034721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.034937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.034964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.035278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.035824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.035934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.035959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.036360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.036865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.036891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.036974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.037519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.037960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.037986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.038063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 
00:35:55.299 [2024-07-12 00:48:23.038602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.299 [2024-07-12 00:48:23.038819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.299 [2024-07-12 00:48:23.038844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.299 qpair failed and we were unable to recover it. 00:35:55.300 [2024-07-12 00:48:23.038930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-07-12 00:48:23.038958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 00:35:55.300 [2024-07-12 00:48:23.039048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-07-12 00:48:23.039074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it. 
00:35:55.300 [2024-07-12 00:48:23.039162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.300 [2024-07-12 00:48:23.039190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.300 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." message pair repeated through 00:48:23.052267, alternating between tqpair=0x7f6aa0000b90, tqpair=0x7f6aa8000b90, and tqpair=0x863990, all against addr=10.0.0.2, port=4420 ...]
00:35:55.302 [2024-07-12 00:48:23.052349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.052456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.052561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.052672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.052784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.052892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.052918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.053431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.053869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.053900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.053986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.054642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.054963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.054994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.055291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.055874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.055899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.055986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.056012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.056091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.056117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.056196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.056228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.056308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.056334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 
00:35:55.302 [2024-07-12 00:48:23.056413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.302 [2024-07-12 00:48:23.056439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.302 qpair failed and we were unable to recover it. 00:35:55.302 [2024-07-12 00:48:23.056525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.056551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.056634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.056661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.056742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.056768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.056850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.056878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.056954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.056980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.057514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.057957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.057984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.058059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.058167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.058278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.058386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.058500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.058604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.058824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.058850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.059582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.059949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.059976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.060172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.060805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.060921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.060948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.061349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.061801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.303 [2024-07-12 00:48:23.061915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.061942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.062026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.062056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.062144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.062170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.062259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.062285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 00:35:55.303 [2024-07-12 00:48:23.062369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.303 [2024-07-12 00:48:23.062397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.303 qpair failed and we were unable to recover it. 
00:35:55.589 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f6aa0000b90 and tqpair=0x7f6aa8000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats through 00:48:23.074403 ...]
00:35:55.590 [2024-07-12 00:48:23.074550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.074577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.074666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.074693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.074775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.074801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.074933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.074959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-07-12 00:48:23.075140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.590 [2024-07-12 00:48:23.075704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.075936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.075963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.076053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.076079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 00:35:55.590 [2024-07-12 00:48:23.076160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.590 [2024-07-12 00:48:23.076187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.590 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.076266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.076366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.076483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.076612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.076731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.076834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.076943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.076969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.077377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.077908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.077934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.078010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.078640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.078957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.078984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.079177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.079714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.079934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.079967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.080269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 00:35:55.591 [2024-07-12 00:48:23.080724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.591 qpair failed and we were unable to recover it. 
00:35:55.591 [2024-07-12 00:48:23.080833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.591 [2024-07-12 00:48:23.080859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.080942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.080968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 
00:35:55.592 [2024-07-12 00:48:23.081394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.081844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.081872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 
00:35:55.592 [2024-07-12 00:48:23.082067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 
00:35:55.592 [2024-07-12 00:48:23.082597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.082929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.082961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 00:35:55.592 [2024-07-12 00:48:23.083038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.592 [2024-07-12 00:48:23.083064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.592 qpair failed and we were unable to recover it. 
00:35:55.592 [2024-07-12 00:48:23.083141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:55.592 [2024-07-12 00:48:23.083167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 
00:35:55.592 qpair failed and we were unable to recover it. 
00:35:55.592 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." record triples repeat continuously from 00:48:23.083242 through 00:48:23.095841, for tqpair=0x7f6aa0000b90, tqpair=0x7f6aa8000b90, and tqpair=0x863990, all with addr=10.0.0.2, port=4420 ...] 
00:35:55.595 [2024-07-12 00:48:23.095929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.095955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-07-12 00:48:23.096468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.096894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.096920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-07-12 00:48:23.096999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 
00:35:55.595 [2024-07-12 00:48:23.097515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.595 [2024-07-12 00:48:23.097893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.595 qpair failed and we were unable to recover it. 00:35:55.595 [2024-07-12 00:48:23.097972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.098086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.098616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.098928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.098955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.099147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.099707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.099914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.099940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.100222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.100752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.100971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.100997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.101083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.101194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 
00:35:55.596 [2024-07-12 00:48:23.101299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.101400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.101514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.596 [2024-07-12 00:48:23.101629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.596 [2024-07-12 00:48:23.101656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.596 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.101731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.101757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 
00:35:55.597 [2024-07-12 00:48:23.101839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.101866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.101952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.101977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 
00:35:55.597 [2024-07-12 00:48:23.102383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.102827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 
00:35:55.597 [2024-07-12 00:48:23.102940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.102966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 
00:35:55.597 [2024-07-12 00:48:23.103499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 00:35:55.597 [2024-07-12 00:48:23.103947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.597 [2024-07-12 00:48:23.103973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.597 qpair failed and we were unable to recover it. 
00:35:55.597 [2024-07-12 00:48:23.104061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.104900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.104988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.105899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.105980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.106007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.597 [2024-07-12 00:48:23.106091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.597 [2024-07-12 00:48:23.106118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.597 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.106917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.106942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.107912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.107999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.108897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.108923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.109972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.109997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.598 qpair failed and we were unable to recover it.
00:35:55.598 [2024-07-12 00:48:23.110606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.598 [2024-07-12 00:48:23.110633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.110717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.110744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.110836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.110864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.110945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.110973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.111913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.111938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.112914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.112940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.113918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.113944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.114900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.114926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.599 [2024-07-12 00:48:23.115008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.599 [2024-07-12 00:48:23.115034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.599 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.115911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.115937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.600 [2024-07-12 00:48:23.116729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.600 qpair failed and we were unable to recover it.
00:35:55.600 [2024-07-12 00:48:23.116814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.116844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.116929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.116956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-07-12 00:48:23.117378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.117818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-07-12 00:48:23.117924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.117949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-07-12 00:48:23.118474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.118908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.118933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-07-12 00:48:23.119013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.119041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.119121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.119146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.119221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.119247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.119328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.119354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 00:35:55.600 [2024-07-12 00:48:23.119437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.600 [2024-07-12 00:48:23.119462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.600 qpair failed and we were unable to recover it. 
00:35:55.600 [2024-07-12 00:48:23.119540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.119565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.119655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.119680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.119763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.119793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.119875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.119901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.119980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.120081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.120607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.120955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.120981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.121176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.121730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.121941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.121967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.122278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 
00:35:55.601 [2024-07-12 00:48:23.122817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.122926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.122954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.123036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.123063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.601 [2024-07-12 00:48:23.123145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.601 [2024-07-12 00:48:23.123171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.601 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 
00:35:55.602 [2024-07-12 00:48:23.123357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 
00:35:55.602 [2024-07-12 00:48:23.123888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.123913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.123998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 
00:35:55.602 [2024-07-12 00:48:23.124423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 00:35:55.602 [2024-07-12 00:48:23.124841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.602 [2024-07-12 00:48:23.124867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.602 qpair failed and we were unable to recover it. 
00:35:55.602 [2024-07-12 00:48:23.124952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.602 [2024-07-12 00:48:23.124981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.602 qpair failed and we were unable to recover it.
[... the three-line record above repeats continuously from 00:48:23.124952 through 00:48:23.137466, alternating between tqpair=0x7f6aa8000b90 and tqpair=0x7f6aa0000b90; every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair is not recovered ...]
00:35:55.605 [2024-07-12 00:48:23.137549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.137576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.137659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.137685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.137771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.137799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.137881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.137908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.137992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-07-12 00:48:23.138094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-07-12 00:48:23.138654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.138903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.138985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-07-12 00:48:23.139206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-07-12 00:48:23.139770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.139914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.139989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.140016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.140100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.140126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 00:35:55.605 [2024-07-12 00:48:23.140207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.605 [2024-07-12 00:48:23.140233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.605 qpair failed and we were unable to recover it. 
00:35:55.605 [2024-07-12 00:48:23.140317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.140430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.140545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.140679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.140791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.140899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.140925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.141471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.141906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.141932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.142017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.142561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.142916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.142991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.143104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.143683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.143907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.143935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.144245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.606 [2024-07-12 00:48:23.144686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 
00:35:55.606 [2024-07-12 00:48:23.144787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.606 [2024-07-12 00:48:23.144813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.606 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.144893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.144920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.145318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.145871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.145896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.145987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.146424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.146867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.146893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.146978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.147524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.147957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.147983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.148064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.148658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.148866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.148892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.149005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.149032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.149110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.149140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 
00:35:55.607 [2024-07-12 00:48:23.149228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.149253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.607 [2024-07-12 00:48:23.149330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.607 [2024-07-12 00:48:23.149355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.607 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.149436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.149464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.149540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.149566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.149685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.149731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.149832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.149860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.149935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.149963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.150380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.150921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.150948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.151029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.151572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.151919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.151948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.152151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.152687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.152900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.152928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.153231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 
00:35:55.608 [2024-07-12 00:48:23.153771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.608 [2024-07-12 00:48:23.153896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.608 qpair failed and we were unable to recover it. 00:35:55.608 [2024-07-12 00:48:23.153972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.153997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-07-12 00:48:23.154289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-07-12 00:48:23.154847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.154954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.154985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-07-12 00:48:23.155425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.155881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.155907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-07-12 00:48:23.155988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.156016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.156098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.156126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.156213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.156239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.156313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.156339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 00:35:55.609 [2024-07-12 00:48:23.156417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.609 [2024-07-12 00:48:23.156442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.609 qpair failed and we were unable to recover it. 
00:35:55.609 [2024-07-12 00:48:23.156529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.609 [2024-07-12 00:48:23.156554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.609 qpair failed and we were unable to recover it.
[identical connect() failures (errno = 111) and unrecoverable qpair errors for tqpair=0x7f6aa0000b90 and tqpair=0x7f6aa8000b90 against addr=10.0.0.2, port=4420 repeat continuously through 00:48:23.169101; repeated entries omitted]
00:35:55.612 [2024-07-12 00:48:23.169175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 
00:35:55.612 [2024-07-12 00:48:23.169727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.169948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.169975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 
00:35:55.612 [2024-07-12 00:48:23.170274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 
00:35:55.612 [2024-07-12 00:48:23.170834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.170946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.170974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.612 [2024-07-12 00:48:23.171066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.612 [2024-07-12 00:48:23.171093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.612 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.171401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.171843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.171945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.171972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.172502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.172937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.172965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.173041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.173579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.173905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.173932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.174130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.174786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.174891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.174917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 
00:35:55.613 [2024-07-12 00:48:23.175369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.613 [2024-07-12 00:48:23.175619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.613 qpair failed and we were unable to recover it. 00:35:55.613 [2024-07-12 00:48:23.175710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.175737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.175818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.175844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-07-12 00:48:23.175919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.175948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-07-12 00:48:23.176459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.176890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.176916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-07-12 00:48:23.176991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-07-12 00:48:23.177516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-07-12 00:48:23.177958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.614 [2024-07-12 00:48:23.177983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-07-12 00:48:23.189915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.189940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-07-12 00:48:23.190459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.190894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.190920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-07-12 00:48:23.191004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-07-12 00:48:23.191540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.191900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.191979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.192005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-07-12 00:48:23.192087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.192114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.192197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.192223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.192302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.192332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.192422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-07-12 00:48:23.192447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-07-12 00:48:23.192531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.192557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-07-12 00:48:23.192640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.192667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.192747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.192773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.192847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.192874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.192961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.192988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-07-12 00:48:23.193177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-07-12 00:48:23.193714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.193963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.193990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.194080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.194107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.194183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.194209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-07-12 00:48:23.194291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.194317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.194401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-07-12 00:48:23.194433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-07-12 00:48:23.194511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.194537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.194615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.194642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.194725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.194752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-07-12 00:48:23.194828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.194854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.194928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.194954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-07-12 00:48:23.195337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-07-12 00:48:23.195871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.195972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.195998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-07-12 00:48:23.196396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.196820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-07-12 00:48:23.196932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.196958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.197045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.197075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.197159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-07-12 00:48:23.197186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-07-12 00:48:23.197268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-07-12 00:48:23.197388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 
00:35:55.622 [2024-07-12 00:48:23.197494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-07-12 00:48:23.197616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-07-12 00:48:23.197719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-07-12 00:48:23.197842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-07-12 00:48:23.197945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-07-12 00:48:23.197971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 
00:35:55.622 [2024-07-12 00:48:23.198072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.622 [2024-07-12 00:48:23.198098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.622 qpair failed and we were unable to recover it.
00:35:55.622 [2024-07-12 00:48:23.198722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.622 [2024-07-12 00:48:23.198755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.622 qpair failed and we were unable to recover it.
[... the same connect() failed / qpair failed sequence repeats for tqpair=0x7f6aa8000b90 and tqpair=0x7f6aa0000b90, addr=10.0.0.2, port=4420, through 00:48:23.210553 ...]
00:35:55.627 [2024-07-12 00:48:23.210651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.210680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.210754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.210780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.210866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.210892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.210969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.210995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 
00:35:55.627 [2024-07-12 00:48:23.211200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 
00:35:55.627 [2024-07-12 00:48:23.211752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.211886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.211974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.212086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.212193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 
00:35:55.627 [2024-07-12 00:48:23.212296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.212396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.627 qpair failed and we were unable to recover it. 00:35:55.627 [2024-07-12 00:48:23.212516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.627 [2024-07-12 00:48:23.212543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.212634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.212661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.212737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.212763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 
00:35:55.628 [2024-07-12 00:48:23.212845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.212873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.212949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.212976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 
00:35:55.628 [2024-07-12 00:48:23.213393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.213830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 
00:35:55.628 [2024-07-12 00:48:23.213944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.213971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-07-12 00:48:23.214053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-07-12 00:48:23.214079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-07-12 00:48:23.214481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.214934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.214962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-07-12 00:48:23.215039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-07-12 00:48:23.215580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.215908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.215934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.216013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.216039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-07-12 00:48:23.216121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.216148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.216224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.216250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.216337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.216364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-07-12 00:48:23.216439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-07-12 00:48:23.216466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.216549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.216576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-07-12 00:48:23.216661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.216687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.216765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.216792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.216869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.216895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.216974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-07-12 00:48:23.217186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-07-12 00:48:23.217749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.217963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.217990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-07-12 00:48:23.218336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.218790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-07-12 00:48:23.218903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.218930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-07-12 00:48:23.219007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-07-12 00:48:23.219034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 
00:35:55.631 [2024-07-12 00:48:23.219466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-07-12 00:48:23.219911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-07-12 00:48:23.219939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 
00:35:55.631 [2024-07-12 00:48:23.220015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.220919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.220946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631 [2024-07-12 00:48:23.221909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-07-12 00:48:23.221936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.222900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.222926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.223901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.223984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.632 qpair failed and we were unable to recover it.
00:35:55.632 [2024-07-12 00:48:23.224826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.632 [2024-07-12 00:48:23.224853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.224928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.224954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.225905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.225931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.226916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.226994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.227945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.227975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.228917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.228943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.229022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.229048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.229125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.229151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.633 [2024-07-12 00:48:23.229235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.633 [2024-07-12 00:48:23.229264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.633 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.229923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.229949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.230033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.230059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.230135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.230164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.230248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.634 [2024-07-12 00:48:23.230274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.634 qpair failed and we were unable to recover it.
00:35:55.634 [2024-07-12 00:48:23.230354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.230456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.230569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.230682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.230791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 
00:35:55.634 [2024-07-12 00:48:23.230905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.230932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.231018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.231044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.231118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.231145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.231227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.231256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.634 [2024-07-12 00:48:23.231333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.231359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 
00:35:55.634 [2024-07-12 00:48:23.231439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.634 [2024-07-12 00:48:23.231465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.634 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.231540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.231566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.231650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.231677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.231751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.231777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.231868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.231896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 
00:35:55.635 [2024-07-12 00:48:23.231978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 
00:35:55.635 [2024-07-12 00:48:23.232526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.232971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.232998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 
00:35:55.635 [2024-07-12 00:48:23.233080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 
00:35:55.635 [2024-07-12 00:48:23.233633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.233960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.233986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.234065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.234090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 
00:35:55.635 [2024-07-12 00:48:23.234174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.234202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.234283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.234310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.234391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.635 [2024-07-12 00:48:23.234421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.635 qpair failed and we were unable to recover it. 00:35:55.635 [2024-07-12 00:48:23.234504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.234529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.234619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.234647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 
00:35:55.636 [2024-07-12 00:48:23.234735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.234761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.234835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.234862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.234950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.234976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 
00:35:55.636 [2024-07-12 00:48:23.235312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 
00:35:55.636 [2024-07-12 00:48:23.235834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.235938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.235965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 
00:35:55.636 [2024-07-12 00:48:23.236360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.236803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 
00:35:55.636 [2024-07-12 00:48:23.236904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.236930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.237012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.237039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.237121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.237149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.237231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.237258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.636 qpair failed and we were unable to recover it. 00:35:55.636 [2024-07-12 00:48:23.237334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.636 [2024-07-12 00:48:23.237360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-07-12 00:48:23.237450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.237479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.237564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.237607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.237691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.237717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.237796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.237822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.237896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.237922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-07-12 00:48:23.237997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-07-12 00:48:23.238536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.238904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.238986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-07-12 00:48:23.239100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.239208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.239314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.239424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-07-12 00:48:23.239533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-07-12 00:48:23.239560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-07-12 00:48:23.239659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.637 [2024-07-12 00:48:23.239686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.637 qpair failed and we were unable to recover it.
00:35:55.637 [2024-07-12 00:48:23.240440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.637 [2024-07-12 00:48:23.240469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.637 qpair failed and we were unable to recover it.
00:35:55.639 [2024-07-12 00:48:23.245944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.639 [2024-07-12 00:48:23.245980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.639 qpair failed and we were unable to recover it.
00:35:55.642 [2024-07-12 00:48:23.252599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.252626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.252717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.252745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.252834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.252861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.252942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.252969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.253044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 
00:35:55.642 [2024-07-12 00:48:23.253155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.253272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.253387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.253503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.642 [2024-07-12 00:48:23.253618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 
00:35:55.642 [2024-07-12 00:48:23.253746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.642 [2024-07-12 00:48:23.253772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.642 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.253851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.253879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.253978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 
00:35:55.643 [2024-07-12 00:48:23.254308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 
00:35:55.643 [2024-07-12 00:48:23.254863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.254967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.254993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 
00:35:55.643 [2024-07-12 00:48:23.255396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.255821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 
00:35:55.643 [2024-07-12 00:48:23.255934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.255962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 
00:35:55.643 [2024-07-12 00:48:23.256499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.643 [2024-07-12 00:48:23.256835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.643 [2024-07-12 00:48:23.256860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.643 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.256946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.256972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 
00:35:55.644 [2024-07-12 00:48:23.257047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 
00:35:55.644 [2024-07-12 00:48:23.257613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.257944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.257971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 
00:35:55.644 [2024-07-12 00:48:23.258158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 
00:35:55.644 [2024-07-12 00:48:23.258735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.258945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.258971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.259050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.259155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 
00:35:55.644 [2024-07-12 00:48:23.259267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.259372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.259483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.644 qpair failed and we were unable to recover it. 00:35:55.644 [2024-07-12 00:48:23.259597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.644 [2024-07-12 00:48:23.259625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.259737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.259763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 
00:35:55.645 [2024-07-12 00:48:23.259842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.259868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.259950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.259979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 
00:35:55.645 [2024-07-12 00:48:23.260389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 00:35:55.645 [2024-07-12 00:48:23.260888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.260928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 
00:35:55.645 [2024-07-12 00:48:23.261014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.645 [2024-07-12 00:48:23.261041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.645 qpair failed and we were unable to recover it. 
[the same three-line failure cycle -- posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats continuously from 00:48:23.261014 through 00:48:23.273803 for tqpairs 0x7f6aa8000b90, 0x863990, and 0x7f6aa0000b90; repeated entries elided]
00:35:55.649 [2024-07-12 00:48:23.273899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.273925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 
00:35:55.649 [2024-07-12 00:48:23.274461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.649 [2024-07-12 00:48:23.274884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.649 qpair failed and we were unable to recover it. 00:35:55.649 [2024-07-12 00:48:23.274965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.274991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.275080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.275675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.275900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.275934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.276244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.276783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.276897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.276924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.277342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.277800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 
00:35:55.650 [2024-07-12 00:48:23.277906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.277932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.278009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.278034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.650 qpair failed and we were unable to recover it. 00:35:55.650 [2024-07-12 00:48:23.278115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.650 [2024-07-12 00:48:23.278143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 
00:35:55.651 [2024-07-12 00:48:23.278501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.278892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.278997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 
00:35:55.651 [2024-07-12 00:48:23.279111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 
00:35:55.651 [2024-07-12 00:48:23.279679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.279916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.279943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 
00:35:55.651 [2024-07-12 00:48:23.280253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 
00:35:55.651 [2024-07-12 00:48:23.280799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.280905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.280931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.281006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.651 [2024-07-12 00:48:23.281032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.651 qpair failed and we were unable to recover it. 00:35:55.651 [2024-07-12 00:48:23.281111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.281219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.652 [2024-07-12 00:48:23.281347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.281466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.281598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.281716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.281824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.652 [2024-07-12 00:48:23.281939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.281967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.652 [2024-07-12 00:48:23.282494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.282914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.282943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.652 [2024-07-12 00:48:23.283030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.652 [2024-07-12 00:48:23.283657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.283907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.283986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.284012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 00:35:55.652 [2024-07-12 00:48:23.284111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.652 [2024-07-12 00:48:23.284140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.652 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.284224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.284789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.284922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.284997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.285331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.285830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.285945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.285972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.286482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.286904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.286985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.287012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 
00:35:55.653 [2024-07-12 00:48:23.287093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.287119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.287197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.653 [2024-07-12 00:48:23.287223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.653 qpair failed and we were unable to recover it. 00:35:55.653 [2024-07-12 00:48:23.287308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.287427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.287536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 
00:35:55.654 [2024-07-12 00:48:23.287680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.287786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.287891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.287917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.287993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 
00:35:55.654 [2024-07-12 00:48:23.288229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 
00:35:55.654 [2024-07-12 00:48:23.288798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.288927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.654 [2024-07-12 00:48:23.288954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.654 qpair failed and we were unable to recover it. 00:35:55.654 [2024-07-12 00:48:23.289041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.289387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.289832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.289944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.289970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.290500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.290954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.290980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.291062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.291669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.291896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.291922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.292005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.292032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.292113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.292138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 
00:35:55.655 [2024-07-12 00:48:23.292212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.292238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.292317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.655 [2024-07-12 00:48:23.292349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.655 qpair failed and we were unable to recover it. 00:35:55.655 [2024-07-12 00:48:23.292423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.292464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.292553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.292581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.292671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.292702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 
00:35:55.656 [2024-07-12 00:48:23.292781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.292806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.292890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.292916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.293006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.293032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.293116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.293146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 00:35:55.656 [2024-07-12 00:48:23.293232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.656 [2024-07-12 00:48:23.293262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.656 qpair failed and we were unable to recover it. 
00:35:55.656 [2024-07-12 00:48:23.293349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.656 [2024-07-12 00:48:23.293377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.656 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats from 00:48:23.293460 through 00:48:23.306353 for tqpair handles 0x7f6aa8000b90, 0x7f6aa0000b90, and 0x7f6ab0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:35:55.660 [2024-07-12 00:48:23.306441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.306469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.306555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.306583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.306690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.306717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.306792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.306818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.306901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.306928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 
00:35:55.660 [2024-07-12 00:48:23.307015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.307042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.307118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.307148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.660 [2024-07-12 00:48:23.307238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.660 [2024-07-12 00:48:23.307266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.660 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.307351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.307476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 
00:35:55.661 [2024-07-12 00:48:23.307601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.307719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.307829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.307939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.307967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 
00:35:55.661 [2024-07-12 00:48:23.308150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 
00:35:55.661 [2024-07-12 00:48:23.308722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.308933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.308962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 
00:35:55.661 [2024-07-12 00:48:23.309253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 
00:35:55.661 [2024-07-12 00:48:23.309796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.309905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.661 [2024-07-12 00:48:23.309932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.661 qpair failed and we were unable to recover it. 00:35:55.661 [2024-07-12 00:48:23.310011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.310350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.310782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.310922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.310950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.311451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.311897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.311925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.312013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.312556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.312911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.312938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.313016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.313117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.313217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.313329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.313447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 00:35:55.662 [2024-07-12 00:48:23.313568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.662 [2024-07-12 00:48:23.313602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.662 qpair failed and we were unable to recover it. 
00:35:55.662 [2024-07-12 00:48:23.313692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.313717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.313800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.313827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.313905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.313931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.314235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.314803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.314914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.314941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.315399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.315825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.315935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.315962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.316493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.316953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.316979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.317071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.317608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.317938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.317966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.318150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.318700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.318921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.318947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.319038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.319065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 00:35:55.663 [2024-07-12 00:48:23.319141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.663 [2024-07-12 00:48:23.319168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.663 qpair failed and we were unable to recover it. 
00:35:55.663 [2024-07-12 00:48:23.319251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.319357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.319464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.319571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.319696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.319809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.319915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.319941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.320334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.320792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.320910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.320939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.321461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.321904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.321930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.322017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.322602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.322966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.322992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.323171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.323732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.323943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.323969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.324279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 
00:35:55.664 [2024-07-12 00:48:23.324825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.324936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.324962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.664 [2024-07-12 00:48:23.325038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.664 [2024-07-12 00:48:23.325065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.664 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 
00:35:55.665 [2024-07-12 00:48:23.325370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 00:35:55.665 [2024-07-12 00:48:23.325794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.665 [2024-07-12 00:48:23.325820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.665 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.338495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.338522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.338612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.338644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.338757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.338783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.338868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.338896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.339161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.339775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.339904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.339986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.340331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.340783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.340892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.340918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.341498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.341964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.341991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.342079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.342675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.342922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.342949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.343032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.343060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 00:35:55.667 [2024-07-12 00:48:23.343148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.667 [2024-07-12 00:48:23.343175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.667 qpair failed and we were unable to recover it. 
00:35:55.667 [2024-07-12 00:48:23.343266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.343373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.343476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.343593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.343706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.343811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.343926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.343952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.344362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.344817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.344927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.344954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.345536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.345908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.345988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.346096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.346612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.346969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.346995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.347073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.668 [2024-07-12 00:48:23.347186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.347301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.347418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.347532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 00:35:55.668 [2024-07-12 00:48:23.347654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.668 [2024-07-12 00:48:23.347681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.668 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.360437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.360465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.360548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.360574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.360692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.360718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.360806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.360833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.360920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.360946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.361033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.361573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.361924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.361950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.362143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.362726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.362931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.362957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.363268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.363821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.363934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.363960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.364383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.364825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.364929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.364955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.365033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.365059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.365141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.365167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.365249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.365278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 00:35:55.671 [2024-07-12 00:48:23.365366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.671 [2024-07-12 00:48:23.365392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.671 qpair failed and we were unable to recover it. 
00:35:55.671 [2024-07-12 00:48:23.365479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.365508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.365640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.365667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.365754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.365780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.365889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.365915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.365998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.366104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.366719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.366929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.366955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.367281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.367840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.367957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.367984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.368388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 00:35:55.672 [2024-07-12 00:48:23.368846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.672 [2024-07-12 00:48:23.368872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.672 qpair failed and we were unable to recover it. 
00:35:55.672 [2024-07-12 00:48:23.368975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.369902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.369983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.370893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.370919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.371939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.371966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.372953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.372982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.373072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.672 [2024-07-12 00:48:23.373099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.672 qpair failed and we were unable to recover it.
00:35:55.672 [2024-07-12 00:48:23.373180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.373886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.373976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.374927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.374954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.375951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.375977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.376969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.376995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.377900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.377987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.378895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.378993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.379896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.379922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.380001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.380027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.380112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.380137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.380225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.673 [2024-07-12 00:48:23.380254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.673 qpair failed and we were unable to recover it.
00:35:55.673 [2024-07-12 00:48:23.380338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.380447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.380557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.380678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.380786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.380889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.380925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.381912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.381938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.382016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.674 [2024-07-12 00:48:23.382044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.674 qpair failed and we were unable to recover it.
00:35:55.674 [2024-07-12 00:48:23.382127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.382677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.382907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.382933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.383281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.383862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.383888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.383971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.384424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.384879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.384906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.384992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.385552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.385910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.385989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.386016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 
00:35:55.674 [2024-07-12 00:48:23.386104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.386133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.386210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.386236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.386312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.386338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.674 qpair failed and we were unable to recover it. 00:35:55.674 [2024-07-12 00:48:23.386412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.674 [2024-07-12 00:48:23.386438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.386530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.386558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.386660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.386688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.386771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.386799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.386877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.386902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.386984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.387196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.387758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.387965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.387991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.388292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.388870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.388897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.388974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.389392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.389857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.389883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.390010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.390597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.390926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.390952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.391168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.391827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.391926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.391953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.392350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.392789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.392894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.392920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.393004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.393117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.393226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.393333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 
00:35:55.675 [2024-07-12 00:48:23.393446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.675 qpair failed and we were unable to recover it. 00:35:55.675 [2024-07-12 00:48:23.393562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.675 [2024-07-12 00:48:23.393596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.393679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.393708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.393799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.393826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.393912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.393938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.394022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.394605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.394926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.394951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.395138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.395686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.395902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.395929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.396224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.396782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.396893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.396921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.397405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.397863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.397971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.397996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.398530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.398896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.398977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.399004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 
00:35:55.676 [2024-07-12 00:48:23.399092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.399123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.399210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.399235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.676 qpair failed and we were unable to recover it. 00:35:55.676 [2024-07-12 00:48:23.399323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.676 [2024-07-12 00:48:23.399352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.399443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.399469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.399550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.399576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.399666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.399692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.399769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.399795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.399885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.399910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.399992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.400221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.400793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.400910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.400935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.401354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.401852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.401959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.401985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.402074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.402098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.402192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.402220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.402311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.402337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 00:35:55.960 [2024-07-12 00:48:23.402434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.960 [2024-07-12 00:48:23.402471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.960 qpair failed and we were unable to recover it. 
00:35:55.960 [2024-07-12 00:48:23.402563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.402593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.402679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.402705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.402791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.402816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.402905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.402931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.403010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.403035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.403112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.403136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.960 [2024-07-12 00:48:23.403224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.960 [2024-07-12 00:48:23.403248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.960 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.403916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.403999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.404903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.404987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.405917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.405942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.961 [2024-07-12 00:48:23.406786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.961 [2024-07-12 00:48:23.406811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.961 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.406887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.406911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.407903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.407989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.408894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.408920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.409933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.409960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.410046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.410072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.410154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.410182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.410267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.410294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.962 [2024-07-12 00:48:23.410379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.962 [2024-07-12 00:48:23.410405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.962 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.410492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.410523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.410601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.410629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.410717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.410746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.410822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.410848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.410935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.410964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.411938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.411965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.412963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.412992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.413074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.963 [2024-07-12 00:48:23.413102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.963 qpair failed and we were unable to recover it.
00:35:55.963 [2024-07-12 00:48:23.413187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 
00:35:55.963 [2024-07-12 00:48:23.413726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.413962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.413990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.414077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.414102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.414196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.414225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 
00:35:55.963 [2024-07-12 00:48:23.414309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.414336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.414421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.963 [2024-07-12 00:48:23.414448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.963 qpair failed and we were unable to recover it. 00:35:55.963 [2024-07-12 00:48:23.414532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.414558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.414648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.414674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.414752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.414779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.414855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.414881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.414957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.414983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.415412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.415845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.415966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.415992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.416506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.416923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.416949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.417031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.417608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.417936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.417962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.418048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.418075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 
00:35:55.964 [2024-07-12 00:48:23.418154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.418182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.418269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.964 [2024-07-12 00:48:23.418295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.964 qpair failed and we were unable to recover it. 00:35:55.964 [2024-07-12 00:48:23.418378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.418481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.418606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.418720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.418841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.418957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.418982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.419283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.419845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.419945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.419971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.420389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.420816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.420924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.420951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.421041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.421146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.421260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.421372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 
00:35:55.965 [2024-07-12 00:48:23.421487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.965 [2024-07-12 00:48:23.421599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.965 [2024-07-12 00:48:23.421626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.965 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.421712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.421738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.421819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.421845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.421928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.421955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 
00:35:55.966 [2024-07-12 00:48:23.422035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.422061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.422148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.422176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.422259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.422285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.422371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.422398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 00:35:55.966 [2024-07-12 00:48:23.422476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.966 [2024-07-12 00:48:23.422501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.966 qpair failed and we were unable to recover it. 
00:35:55.966 [2024-07-12 00:48:23.422580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.422612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.422693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.422720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.422802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.422827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.422906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.422932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.423902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.423928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.424901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.424993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.425022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.425109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.425136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.425217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.425243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.425326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.966 [2024-07-12 00:48:23.425353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.966 qpair failed and we were unable to recover it.
00:35:55.966 [2024-07-12 00:48:23.425428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.425453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.425534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.425560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.425648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.425674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.425768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.425795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.425880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.425909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.425996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.426907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.426934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.427950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.427975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.428928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.428956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.429056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.967 [2024-07-12 00:48:23.429083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.967 qpair failed and we were unable to recover it.
00:35:55.967 [2024-07-12 00:48:23.429160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.429931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.429960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.430915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.430940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.431890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.431915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.432905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.968 [2024-07-12 00:48:23.432930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.968 qpair failed and we were unable to recover it.
00:35:55.968 [2024-07-12 00:48:23.433014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.433915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.433958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.434940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.434967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.435054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.435082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.435169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.435195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.435283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.969 [2024-07-12 00:48:23.435312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:55.969 qpair failed and we were unable to recover it.
00:35:55.969 [2024-07-12 00:48:23.435403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.435518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.435649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.435757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.435858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 
00:35:55.969 [2024-07-12 00:48:23.435967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.435993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 
00:35:55.969 [2024-07-12 00:48:23.436514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.969 qpair failed and we were unable to recover it. 00:35:55.969 [2024-07-12 00:48:23.436872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.969 [2024-07-12 00:48:23.436899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.436989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.437094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.437644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.437884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.437970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.438206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.438751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.438963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.438988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.439287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.439855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.439969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.439995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.440070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.440096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.440173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.440199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.970 [2024-07-12 00:48:23.440287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.440316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 
00:35:55.970 [2024-07-12 00:48:23.440396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.970 [2024-07-12 00:48:23.440424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.970 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.440512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.440538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.440621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.440648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.440729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.440755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.440881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.440907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.440989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.441525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.441967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.441994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.442074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.442661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.442900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.442986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.443207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.443770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.443886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.443914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.444009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.444036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.444111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.444138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.444244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.444271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 
00:35:55.971 [2024-07-12 00:48:23.444350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.444377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.444459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.971 [2024-07-12 00:48:23.444485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.971 qpair failed and we were unable to recover it. 00:35:55.971 [2024-07-12 00:48:23.444597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.444624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.444706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.444732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.444810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.444837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.444916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.444941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.445524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.445967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.445993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.446134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.446265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.446380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.446483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.446585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.446710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.446881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.446938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.447369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.447846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.447948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.447977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 
00:35:55.972 [2024-07-12 00:48:23.448559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.972 [2024-07-12 00:48:23.448820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.972 [2024-07-12 00:48:23.448853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.972 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.448937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.448965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.449051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.449210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.449350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.449469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.449576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.449699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.449835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.449861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.450011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.450227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.450369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.450557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.450716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.450897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.450948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.451438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.451883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.451910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.452022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.452602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.452949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.452975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.453057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.453084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 
00:35:55.973 [2024-07-12 00:48:23.453167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.453193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.453273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.973 [2024-07-12 00:48:23.453301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.973 qpair failed and we were unable to recover it. 00:35:55.973 [2024-07-12 00:48:23.453383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.453490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.453613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 
00:35:55.974 [2024-07-12 00:48:23.453724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.453828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.453944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.453974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 
00:35:55.974 [2024-07-12 00:48:23.454280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 
00:35:55.974 [2024-07-12 00:48:23.454814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.454921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.454946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.455027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.455055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.455145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.455174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 00:35:55.974 [2024-07-12 00:48:23.455269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.455296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 
00:35:55.974 [2024-07-12 00:48:23.455392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.974 [2024-07-12 00:48:23.455418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.974 qpair failed and we were unable to recover it. 
[… the three-line error above repeats ~114 more times between 00:48:23.455503 and 00:48:23.468124, identical except for the timestamp and the tqpair value (0x7f6ab0000b90, 0x7f6aa8000b90, 0x7f6aa0000b90, or 0x863990); every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 …]
00:35:55.977 [2024-07-12 00:48:23.468200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.977 [2024-07-12 00:48:23.468228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.468765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.468902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.468989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.469298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.469848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.469961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.469988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.470411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.470870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.470896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.470977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 
00:35:55.978 [2024-07-12 00:48:23.471525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.978 [2024-07-12 00:48:23.471772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.978 qpair failed and we were unable to recover it. 00:35:55.978 [2024-07-12 00:48:23.471856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.471890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.471969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.471996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.472071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.472600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.472929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.472957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.473146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.473798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.473905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.473930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.474322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.474783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.474905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.474934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 
00:35:55.979 [2024-07-12 00:48:23.475475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.979 [2024-07-12 00:48:23.475840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.979 qpair failed and we were unable to recover it. 00:35:55.979 [2024-07-12 00:48:23.475953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.475978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 
00:35:55.980 [2024-07-12 00:48:23.476056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 
00:35:55.980 [2024-07-12 00:48:23.476632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.476963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.476989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 00:35:55.980 [2024-07-12 00:48:23.477080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.980 [2024-07-12 00:48:23.477108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.980 qpair failed and we were unable to recover it. 
00:35:55.983 [2024-07-12 00:48:23.489764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.489790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.983 qpair failed and we were unable to recover it. 00:35:55.983 [2024-07-12 00:48:23.489868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.489894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.983 qpair failed and we were unable to recover it. 00:35:55.983 [2024-07-12 00:48:23.489970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.489995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.983 qpair failed and we were unable to recover it. 00:35:55.983 [2024-07-12 00:48:23.490075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.490100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.983 qpair failed and we were unable to recover it. 00:35:55.983 [2024-07-12 00:48:23.490196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.490222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.983 qpair failed and we were unable to recover it. 
00:35:55.983 [2024-07-12 00:48:23.490322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.983 [2024-07-12 00:48:23.490350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.490437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.490463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.490548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.490575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.490666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.490692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.490774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.490800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.490890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.490916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.490994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.491463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.491929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.491954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.492032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.492552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.492971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.492996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.493071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.493619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.493947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.493973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.494056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.494084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 
00:35:55.984 [2024-07-12 00:48:23.494171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.494199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.494277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.494307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.984 [2024-07-12 00:48:23.494387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.984 [2024-07-12 00:48:23.494415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.984 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.494498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.494526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.494614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.494644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 
00:35:55.985 [2024-07-12 00:48:23.494726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.494752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.494837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.494864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.494943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.494969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 
00:35:55.985 [2024-07-12 00:48:23.495253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 
00:35:55.985 [2024-07-12 00:48:23.495839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.495866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.495993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 
00:35:55.985 [2024-07-12 00:48:23.496438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.496884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.496911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 
00:35:55.985 [2024-07-12 00:48:23.496987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.497013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.497087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.497113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.497197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.497223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.497305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.497331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.985 qpair failed and we were unable to recover it. 00:35:55.985 [2024-07-12 00:48:23.497411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.985 [2024-07-12 00:48:23.497437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.497523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.497549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.497661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.497691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.497775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.497804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.497938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.497965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.498146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.498676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.498918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.498996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.499216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.499742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.499954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.499980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.500288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 
00:35:55.986 [2024-07-12 00:48:23.500853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.986 qpair failed and we were unable to recover it. 00:35:55.986 [2024-07-12 00:48:23.500958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.986 [2024-07-12 00:48:23.500984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.501396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.501834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.501941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.501967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.502540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.502917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.502996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.503096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.503663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.503902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.503981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.504196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 
00:35:55.987 [2024-07-12 00:48:23.504730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.504856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.987 [2024-07-12 00:48:23.504883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.987 qpair failed and we were unable to recover it. 00:35:55.987 [2024-07-12 00:48:23.505017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.505371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.505825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.505948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.505976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.506527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.506919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.506945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.507139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.507716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.507927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.507952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.508251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.988 [2024-07-12 00:48:23.508706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 
00:35:55.988 [2024-07-12 00:48:23.508816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.988 [2024-07-12 00:48:23.508842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.988 qpair failed and we were unable to recover it. 00:35:55.989 [2024-07-12 00:48:23.508926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.989 [2024-07-12 00:48:23.508956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.989 qpair failed and we were unable to recover it. 00:35:55.989 [2024-07-12 00:48:23.509051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.989 [2024-07-12 00:48:23.509078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.989 qpair failed and we were unable to recover it. 00:35:55.989 [2024-07-12 00:48:23.509181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.989 [2024-07-12 00:48:23.509208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.989 qpair failed and we were unable to recover it. 00:35:55.989 [2024-07-12 00:48:23.509289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.989 [2024-07-12 00:48:23.509317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.989 qpair failed and we were unable to recover it. 
[... the same posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error against addr=10.0.0.2, port=4420 repeats continuously for tqpairs 0x7f6aa8000b90, 0x7f6aa0000b90, and 0x7f6ab0000b90 through 2024-07-12 00:48:23.521992; every qpair failed and we were unable to recover it ...]
00:35:55.992 [2024-07-12 00:48:23.522073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 
00:35:55.992 [2024-07-12 00:48:23.522600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.522908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.522933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.523015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.523042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 
00:35:55.992 [2024-07-12 00:48:23.523120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.523147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.523224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.523250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.523327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.523357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.992 qpair failed and we were unable to recover it. 00:35:55.992 [2024-07-12 00:48:23.523436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.992 [2024-07-12 00:48:23.523462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.523541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.523572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.523662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.523688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.523766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.523792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.523872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.523898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.523980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.524205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.524750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.524903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.524990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.525319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.525778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.525887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.525913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.526464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.526906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.526932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 
00:35:55.993 [2024-07-12 00:48:23.527006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.527033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.527124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.527152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.527233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.993 [2024-07-12 00:48:23.527261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.993 qpair failed and we were unable to recover it. 00:35:55.993 [2024-07-12 00:48:23.527347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.527460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.527561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.527668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.527769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.527893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.527930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.528119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.528685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.528924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.528951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.529246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.529850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.529957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.529983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.530400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.530857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 
00:35:55.994 [2024-07-12 00:48:23.530968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.530997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.531074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.531100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.531184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.994 [2024-07-12 00:48:23.531210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.994 qpair failed and we were unable to recover it. 00:35:55.994 [2024-07-12 00:48:23.531292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.995 [2024-07-12 00:48:23.531320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.995 qpair failed and we were unable to recover it. 00:35:55.995 [2024-07-12 00:48:23.531395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.995 [2024-07-12 00:48:23.531421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.995 qpair failed and we were unable to recover it. 
00:35:55.995 [2024-07-12 00:48:23.531502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.531529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.531624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.531653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.531731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.531757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.531843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.531869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.531945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.531971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.532923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.532949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.533960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.533988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.534068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.995 [2024-07-12 00:48:23.534095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.995 qpair failed and we were unable to recover it.
00:35:55.995 [2024-07-12 00:48:23.534182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.534944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.534971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.535960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.535988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.536887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.536914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.996 [2024-07-12 00:48:23.537860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.996 qpair failed and we were unable to recover it.
00:35:55.996 [2024-07-12 00:48:23.537942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.537968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.538962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.538988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.539916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.539942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.540922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.540997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.997 [2024-07-12 00:48:23.541628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.997 qpair failed and we were unable to recover it.
00:35:55.997 [2024-07-12 00:48:23.541709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.998 [2024-07-12 00:48:23.541736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.998 qpair failed and we were unable to recover it.
00:35:55.998 [2024-07-12 00:48:23.541816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.998 [2024-07-12 00:48:23.541842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.998 qpair failed and we were unable to recover it.
00:35:55.998 [2024-07-12 00:48:23.541942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.998 [2024-07-12 00:48:23.541968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.998 qpair failed and we were unable to recover it.
00:35:55.998 [2024-07-12 00:48:23.542047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.998 [2024-07-12 00:48:23.542072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:55.998 qpair failed and we were unable to recover it.
00:35:55.998 [2024-07-12 00:48:23.542149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 
00:35:55.998 [2024-07-12 00:48:23.542689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.542903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.542929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 
00:35:55.998 [2024-07-12 00:48:23.543221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 
00:35:55.998 [2024-07-12 00:48:23.543797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.543899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.543925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 
00:35:55.998 [2024-07-12 00:48:23.544338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.544811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 
00:35:55.998 [2024-07-12 00:48:23.544914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.544940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.545025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.545050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.545126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.545151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.545233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.998 [2024-07-12 00:48:23.545259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.998 qpair failed and we were unable to recover it. 00:35:55.998 [2024-07-12 00:48:23.545343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.545449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.545560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.545680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.545807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.545915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.545941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.546018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.546608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.546934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.546961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.547142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.547744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.547879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.547979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.548297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 
00:35:55.999 [2024-07-12 00:48:23.548841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.548964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.548991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:55.999 [2024-07-12 00:48:23.549068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.999 [2024-07-12 00:48:23.549094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:55.999 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 
00:35:56.000 [2024-07-12 00:48:23.549381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.549830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 
00:35:56.000 [2024-07-12 00:48:23.549946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.549972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 
00:35:56.000 [2024-07-12 00:48:23.550497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.550930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.550956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 
00:35:56.000 [2024-07-12 00:48:23.551041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.551069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.551171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.551195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.551290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.551317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.551395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.551421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 00:35:56.000 [2024-07-12 00:48:23.551496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.000 [2024-07-12 00:48:23.551521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.000 qpair failed and we were unable to recover it. 
00:35:56.000 [2024-07-12 00:48:23.551614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.551641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.551730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.551758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.551844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.551871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.551955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.551981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.552060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.552086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.552168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.000 [2024-07-12 00:48:23.552193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.000 qpair failed and we were unable to recover it.
00:35:56.000 [2024-07-12 00:48:23.552276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.552928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.552954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.553916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.553941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.554913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.554940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.555902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.555928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.556005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.556032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.556113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.001 [2024-07-12 00:48:23.556138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.001 qpair failed and we were unable to recover it.
00:35:56.001 [2024-07-12 00:48:23.556225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.556907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.556934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.557931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.557959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.558900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.558927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.559905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.559930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.560024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.560051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.560138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.002 [2024-07-12 00:48:23.560164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.002 qpair failed and we were unable to recover it.
00:35:56.002 [2024-07-12 00:48:23.560246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.560934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.560961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.561941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.561970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.562921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.562946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.563894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.563920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.564009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.564035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.564110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.564136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.564225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.564253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.564336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.003 [2024-07-12 00:48:23.564362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.003 qpair failed and we were unable to recover it.
00:35:56.003 [2024-07-12 00:48:23.564445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.003 [2024-07-12 00:48:23.564473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.003 qpair failed and we were unable to recover it. 00:35:56.003 [2024-07-12 00:48:23.564555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.003 [2024-07-12 00:48:23.564581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.003 qpair failed and we were unable to recover it. 00:35:56.003 [2024-07-12 00:48:23.564668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.003 [2024-07-12 00:48:23.564695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.003 qpair failed and we were unable to recover it. 00:35:56.003 [2024-07-12 00:48:23.564773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.003 [2024-07-12 00:48:23.564799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.003 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.564885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.564911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.564993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.565550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.565904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.565999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.566219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.566784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.566915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.566999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.567344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.567800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.567900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.567925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.568456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.004 [2024-07-12 00:48:23.568902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.568931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 
00:35:56.004 [2024-07-12 00:48:23.569015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.004 [2024-07-12 00:48:23.569043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.004 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.569543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.569915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.569992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.570107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.570626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.570961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.570987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.571182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.571705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.571926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.571953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.572151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.572181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.572260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.572286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 
00:35:56.005 [2024-07-12 00:48:23.572377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.005 [2024-07-12 00:48:23.572420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.005 qpair failed and we were unable to recover it. 00:35:56.005 [2024-07-12 00:48:23.572526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.572553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.572646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.572675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.572766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.572794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.572878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.572905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.572986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.573528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.573899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.573983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.574085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.574665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.574895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.574922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.575213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.575771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.575908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.575982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.576085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.576242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 
00:35:56.006 [2024-07-12 00:48:23.576354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.576508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.576682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.006 qpair failed and we were unable to recover it. 00:35:56.006 [2024-07-12 00:48:23.576811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.006 [2024-07-12 00:48:23.576837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.576920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.576946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.577027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.577562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.577898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.577924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.578116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.578655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.578903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.578980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.579192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.579734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.579959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.579986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.580316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.580878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.580903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.580982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.581008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.581093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.581120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.581198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.581227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 00:35:56.007 [2024-07-12 00:48:23.581313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.581340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.007 qpair failed and we were unable to recover it. 
00:35:56.007 [2024-07-12 00:48:23.581417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.007 [2024-07-12 00:48:23.581443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.581526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.581553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.581644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.581670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.581756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.581781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.581871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.581898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 
00:35:56.008 [2024-07-12 00:48:23.581982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 
00:35:56.008 [2024-07-12 00:48:23.582513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.582869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.582976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 
00:35:56.008 [2024-07-12 00:48:23.583097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.583210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.583328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.583437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 00:35:56.008 [2024-07-12 00:48:23.583552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.008 [2024-07-12 00:48:23.583579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.008 qpair failed and we were unable to recover it. 
00:35:56.008 [2024-07-12 00:48:23.583670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.008 [2024-07-12 00:48:23.583699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.008 qpair failed and we were unable to recover it.
[... the three messages above repeat ~110 more times between [2024-07-12 00:48:23.583780] and [2024-07-12 00:48:23.596643] with identical errno = 111 (ECONNREFUSED), addr=10.0.0.2, port=4420, cycling through tqpair handles 0x863990, 0x7f6aa0000b90, 0x7f6aa8000b90, and 0x7f6ab0000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:56.011 [2024-07-12 00:48:23.596718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.596743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.596820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.596846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.596926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.596951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 
00:35:56.011 [2024-07-12 00:48:23.597266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 
00:35:56.011 [2024-07-12 00:48:23.597831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.597941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.597968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.598046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.598075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.011 qpair failed and we were unable to recover it. 00:35:56.011 [2024-07-12 00:48:23.598158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.011 [2024-07-12 00:48:23.598185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.598266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.598378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.598492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.598612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.598726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.598839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.598955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.598980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.599501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.599928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.599956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.600155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.600697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.600932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.600958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.601269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 
00:35:56.012 [2024-07-12 00:48:23.601810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.012 [2024-07-12 00:48:23.601836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.012 qpair failed and we were unable to recover it. 00:35:56.012 [2024-07-12 00:48:23.601914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.601940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.602351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.602783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.602890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.602920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.603474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.603934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.603962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.604039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.604565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.604911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.604937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.605018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.605046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 
00:35:56.013 [2024-07-12 00:48:23.605130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.605157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.605250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.605278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.605360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.013 [2024-07-12 00:48:23.605387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.013 qpair failed and we were unable to recover it. 00:35:56.013 [2024-07-12 00:48:23.605468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.605496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.605572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.605607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.605696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.605722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.605812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.605841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.605919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.605949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.606256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.606822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.606936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.606963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.607379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.607934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.607959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.608048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 
00:35:56.014 [2024-07-12 00:48:23.608595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.608923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.014 [2024-07-12 00:48:23.608949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.014 qpair failed and we were unable to recover it. 00:35:56.014 [2024-07-12 00:48:23.609028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 
00:35:56.015 [2024-07-12 00:48:23.609148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 
00:35:56.015 [2024-07-12 00:48:23.609740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.609967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.609993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.610203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.610431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 
00:35:56.015 [2024-07-12 00:48:23.610533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.610648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.610767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.610895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.610923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.611007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 
00:35:56.015 [2024-07-12 00:48:23.611112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.611273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.611401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.611512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 00:35:56.015 [2024-07-12 00:48:23.611623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.015 [2024-07-12 00:48:23.611652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.015 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.611742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.611768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.611848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.611874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.611962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.611989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.612308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.612859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.612969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.612997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.613423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.613882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.613914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.614001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.614539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.614974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.614999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 
00:35:56.016 [2024-07-12 00:48:23.615080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.016 [2024-07-12 00:48:23.615108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.016 qpair failed and we were unable to recover it. 00:35:56.016 [2024-07-12 00:48:23.615200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 
00:35:56.017 [2024-07-12 00:48:23.615657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.615908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.615986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 
00:35:56.017 [2024-07-12 00:48:23.616221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 
00:35:56.017 [2024-07-12 00:48:23.616756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.616910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.616995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.617024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.617103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.617130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 00:35:56.017 [2024-07-12 00:48:23.617209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.017 [2024-07-12 00:48:23.617234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.017 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.629166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.629701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.629907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.629932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.630222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.630795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.630919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.630946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.631368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 00:35:56.021 [2024-07-12 00:48:23.631808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.021 [2024-07-12 00:48:23.631840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.021 qpair failed and we were unable to recover it. 
00:35:56.021 [2024-07-12 00:48:23.631947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.631974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.632502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.632939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.632967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.633049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.633625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.633939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.633964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.634137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.634704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.634926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.634953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.635040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.635069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 00:35:56.022 [2024-07-12 00:48:23.635154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.022 [2024-07-12 00:48:23.635182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.022 qpair failed and we were unable to recover it. 
00:35:56.022 [2024-07-12 00:48:23.635268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.635380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.635483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.635597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.635700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 
00:35:56.023 [2024-07-12 00:48:23.635817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.635925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.635953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 
00:35:56.023 [2024-07-12 00:48:23.636389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.636822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 
00:35:56.023 [2024-07-12 00:48:23.636925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.636951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.637033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.637059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.637134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.637160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.637248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.637282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 00:35:56.023 [2024-07-12 00:48:23.637363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.023 [2024-07-12 00:48:23.637391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.023 qpair failed and we were unable to recover it. 
00:35:56.023 [2024-07-12 00:48:23.637478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.023 [2024-07-12 00:48:23.637507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.023 qpair failed and we were unable to recover it.
[... ~110 further identical connect() retry failures (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 between 00:48:23.637 and 00:48:23.650, cycling through tqpairs 0x863990, 0x7f6aa0000b90, 0x7f6aa8000b90, and 0x7f6ab0000b90; each ends with "qpair failed and we were unable to recover it." ...]
00:35:56.027 [2024-07-12 00:48:23.650136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.027 [2024-07-12 00:48:23.650163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.027 qpair failed and we were unable to recover it.
00:35:56.027 [2024-07-12 00:48:23.650252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.650355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.650472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.650593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.650701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 
00:35:56.027 [2024-07-12 00:48:23.650806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.650914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.650940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 
00:35:56.027 [2024-07-12 00:48:23.651351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.651801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 
00:35:56.027 [2024-07-12 00:48:23.651909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.651937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.652020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.652049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.652139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.652168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.652258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.652286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 00:35:56.027 [2024-07-12 00:48:23.652363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.027 [2024-07-12 00:48:23.652389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.027 qpair failed and we were unable to recover it. 
00:35:56.027 [2024-07-12 00:48:23.652470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.652495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.652581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.652616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.652702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.652728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.652811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.652836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.652920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.652947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.653021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.653668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.653917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.653992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.654205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.654769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.654909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.654993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.655320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 
00:35:56.028 [2024-07-12 00:48:23.655847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.028 [2024-07-12 00:48:23.655876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.028 qpair failed and we were unable to recover it. 00:35:56.028 [2024-07-12 00:48:23.655965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.655990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.656394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.656959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.656985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.657059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.657704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.657941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.657967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.658255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.658837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.658959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.658986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.659072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.659099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.659185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.659214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.659296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.659325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 
00:35:56.029 [2024-07-12 00:48:23.659412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.029 [2024-07-12 00:48:23.659439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.029 qpair failed and we were unable to recover it. 00:35:56.029 [2024-07-12 00:48:23.659518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.659543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.659671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.659699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.659784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.659812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.659895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.659920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.660000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.660543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.660878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.660905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.661124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.661682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.661906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.661937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.662229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 
00:35:56.030 [2024-07-12 00:48:23.662772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.030 [2024-07-12 00:48:23.662904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.030 qpair failed and we were unable to recover it. 00:35:56.030 [2024-07-12 00:48:23.662987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.663323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.663869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.663972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.663997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.664411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.664877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.664907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.664990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.665536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.665901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.665928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.666006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.666031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 
00:35:56.031 [2024-07-12 00:48:23.666120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.666146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.666225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.666251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.666351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.666378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.666466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.031 [2024-07-12 00:48:23.666502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.031 qpair failed and we were unable to recover it. 00:35:56.031 [2024-07-12 00:48:23.666603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.666644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 
00:35:56.032 [2024-07-12 00:48:23.666734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.666762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.666840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.666866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.666955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.666981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 
00:35:56.032 [2024-07-12 00:48:23.667292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 
00:35:56.032 [2024-07-12 00:48:23.667841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.667958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.667987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.032 [2024-07-12 00:48:23.668069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.032 [2024-07-12 00:48:23.668094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.032 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 
00:35:56.033 [2024-07-12 00:48:23.668393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.668834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 
00:35:56.033 [2024-07-12 00:48:23.668940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.668965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 
00:35:56.033 [2024-07-12 00:48:23.669514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.669942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.669969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 
00:35:56.033 [2024-07-12 00:48:23.670048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.670074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.670158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.670185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.670270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.670296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.670370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.670396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 00:35:56.033 [2024-07-12 00:48:23.670479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.033 [2024-07-12 00:48:23.670505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.033 qpair failed and we were unable to recover it. 
00:35:56.033 [2024-07-12 00:48:23.670582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.670613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.670695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.670723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.670807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.670834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.670915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.670942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.033 [2024-07-12 00:48:23.671565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.033 [2024-07-12 00:48:23.671598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.033 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.671678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.671703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.671788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.671814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.671888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.671913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.671991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.672909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.672992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.673959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.673985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.674893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.674978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.675004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.675098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.675124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.675207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.675236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.675315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.034 [2024-07-12 00:48:23.675342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.034 qpair failed and we were unable to recover it.
00:35:56.034 [2024-07-12 00:48:23.675419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.675445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.675527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.675553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.675647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.675676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.675770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.675806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.675894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.675922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.675998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.676902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.676929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.677908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.677990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.678961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.035 [2024-07-12 00:48:23.678987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.035 qpair failed and we were unable to recover it.
00:35:56.035 [2024-07-12 00:48:23.679070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.679957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.679983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.680941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.680967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.681083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.681196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.681296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.681413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.036 [2024-07-12 00:48:23.681521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.036 qpair failed and we were unable to recover it.
00:35:56.036 [2024-07-12 00:48:23.681614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.681644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.681725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.681751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.681848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.681876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.681958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.681984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.682062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.682088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 
00:35:56.036 [2024-07-12 00:48:23.682173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.682199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.682285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.682311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.036 qpair failed and we were unable to recover it. 00:35:56.036 [2024-07-12 00:48:23.682388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.036 [2024-07-12 00:48:23.682414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.682495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.682522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.682606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.682636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.682725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.682754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.682837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.682863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.682949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.682975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.683271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.683847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.683948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.683974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.684378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.684821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.684929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.684957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 
00:35:56.037 [2024-07-12 00:48:23.685478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.037 [2024-07-12 00:48:23.685827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.037 qpair failed and we were unable to recover it. 00:35:56.037 [2024-07-12 00:48:23.685914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.685941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 
00:35:56.038 [2024-07-12 00:48:23.686030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 
00:35:56.038 [2024-07-12 00:48:23.686571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.686927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.686965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 
00:35:56.038 [2024-07-12 00:48:23.687178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 
00:35:56.038 [2024-07-12 00:48:23.687712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.687929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.687958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.688045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.688163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 
00:35:56.038 [2024-07-12 00:48:23.688284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.688414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.688536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.038 qpair failed and we were unable to recover it. 00:35:56.038 [2024-07-12 00:48:23.688667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.038 [2024-07-12 00:48:23.688698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.688793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.688819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.688897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.688926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.689468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.689906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.689936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.690020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.690675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.690888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.690914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.691214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 
00:35:56.039 [2024-07-12 00:48:23.691769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.691900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.691984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.692011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.039 [2024-07-12 00:48:23.692091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.039 [2024-07-12 00:48:23.692118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.039 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.692304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.692848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.692962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.692988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.693386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.693818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.693924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.693949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.694469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.694934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.694959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.695040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.695149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.695263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.695372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.040 [2024-07-12 00:48:23.695476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 
00:35:56.040 [2024-07-12 00:48:23.695584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.040 [2024-07-12 00:48:23.695615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.040 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.695702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.695731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.695819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.695846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.695928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.695955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.696167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.696702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.696925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.696951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.697243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.697791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.697900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.697925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.698332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.041 [2024-07-12 00:48:23.698775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 
00:35:56.041 [2024-07-12 00:48:23.698883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.041 [2024-07-12 00:48:23.698910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.041 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.698992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 
00:35:56.042 [2024-07-12 00:48:23.699441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.699906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.699934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 
00:35:56.042 [2024-07-12 00:48:23.700017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 
00:35:56.042 [2024-07-12 00:48:23.700574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.700911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.700938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.701027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 
00:35:56.042 [2024-07-12 00:48:23.701143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.701250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.701360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.701461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 00:35:56.042 [2024-07-12 00:48:23.701568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.042 [2024-07-12 00:48:23.701599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.042 qpair failed and we were unable to recover it. 
00:35:56.042 [2024-07-12 00:48:23.701685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.042 [2024-07-12 00:48:23.701711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.042 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" (ECONNREFUSED) and "qpair failed and we were unable to recover it" record pairs repeated through 00:48:23.711, cycling over tqpair values 0x7f6ab0000b90, 0x7f6aa0000b90, 0x7f6aa8000b90, and 0x863990, all for addr=10.0.0.2, port=4420 ...]
00:35:56.046 [2024-07-12 00:48:23.711647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.711676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.711756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1083902 Killed "${NVMF_APP[@]}" "$@" 00:35:56.046 [2024-07-12 00:48:23.711784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.711873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.711900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.711985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 
00:35:56.046 [2024-07-12 00:48:23.712195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.046 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 
00:35:56.046 [2024-07-12 00:48:23.712729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:56.046 [2024-07-12 00:48:23.712845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.712945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.712975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.713062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:56.046 [2024-07-12 00:48:23.713090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 
00:35:56.046 [2024-07-12 00:48:23.713169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.046 [2024-07-12 00:48:23.713194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.046 qpair failed and we were unable to recover it.
00:35:56.046 [2024-07-12 00:48:23.713273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.046 [2024-07-12 00:48:23.713303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.046 qpair failed and we were unable to recover it.
00:35:56.046 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:35:56.046 [2024-07-12 00:48:23.713389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.046 [2024-07-12 00:48:23.713418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.046 qpair failed and we were unable to recover it.
00:35:56.046 [2024-07-12 00:48:23.713497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.046 [2024-07-12 00:48:23.713525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420
00:35:56.046 qpair failed and we were unable to recover it.
00:35:56.046 [2024-07-12 00:48:23.713605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.046 [2024-07-12 00:48:23.713640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420
00:35:56.046 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.046 qpair failed and we were unable to recover it.
00:35:56.046 [2024-07-12 00:48:23.713737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.713764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.713845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.713871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.713956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.713982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.714058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.714083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.714165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.714190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 
00:35:56.046 [2024-07-12 00:48:23.714279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.714308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.714399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.046 [2024-07-12 00:48:23.714426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.046 qpair failed and we were unable to recover it. 00:35:56.046 [2024-07-12 00:48:23.714512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.714538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.714627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.714654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.714735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.714761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.714845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.714870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.714948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.714973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.715379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.715831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.715966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.715992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.716501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.716916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.716998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.717103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 
00:35:56.047 [2024-07-12 00:48:23.717685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.047 [2024-07-12 00:48:23.717888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.047 [2024-07-12 00:48:23.717914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.047 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.717999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.718111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 
00:35:56.048 [2024-07-12 00:48:23.718339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.718449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.718565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.718687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.718790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 
00:35:56.048 [2024-07-12 00:48:23.718898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.718923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.719016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.719043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.719123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.719149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.719233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.719260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.719343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.719370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 
00:35:56.048 [2024-07-12 00:48:23.719465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.719505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1084328
00:35:56.048 [2024-07-12 00:48:23.719597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.719627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 [2024-07-12 00:48:23.719705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.719731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1084328
00:35:56.048 [2024-07-12 00:48:23.719813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.719839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 [2024-07-12 00:48:23.719918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.719943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 1084328 ']'
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:56.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:35:56.048 00:48:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.048 [2024-07-12 00:48:23.721973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.048 [2024-07-12 00:48:23.722008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.048 qpair failed and we were unable to recover it.
00:35:56.048 [2024-07-12 00:48:23.722243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.722355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.722470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.722582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.722706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 
00:35:56.048 [2024-07-12 00:48:23.722819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.722924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.722951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.723038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.723063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.723149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.048 [2024-07-12 00:48:23.723174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.048 qpair failed and we were unable to recover it. 00:35:56.048 [2024-07-12 00:48:23.723271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 
00:35:56.049 [2024-07-12 00:48:23.723394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.723511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.723649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.723771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.723886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.723913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 
00:35:56.049 [2024-07-12 00:48:23.723993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 
00:35:56.049 [2024-07-12 00:48:23.724564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.724907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.724993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.725019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 00:35:56.049 [2024-07-12 00:48:23.725108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.049 [2024-07-12 00:48:23.725135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.049 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.738281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.738388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.738495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.738635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.738741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.738857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.738960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.738985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.739406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.739871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.739897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.739983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.740566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.740900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.740927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.741007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.741037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 
00:35:56.053 [2024-07-12 00:48:23.741119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.741146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.053 qpair failed and we were unable to recover it. 00:35:56.053 [2024-07-12 00:48:23.741241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.053 [2024-07-12 00:48:23.741267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.741344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.741470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.741592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.741718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.741839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.741944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.741969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.742293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.742851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.742967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.742993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.743406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.743857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.743885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.743975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.744088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.744201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.744310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.744422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 
00:35:56.054 [2024-07-12 00:48:23.744536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.054 qpair failed and we were unable to recover it. 00:35:56.054 [2024-07-12 00:48:23.744667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.054 [2024-07-12 00:48:23.744695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.744779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.744804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.744879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.744905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.744985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-07-12 00:48:23.745091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-07-12 00:48:23.745673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.745884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.745910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-07-12 00:48:23.746262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 00:35:56.055 [2024-07-12 00:48:23.746714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.055 [2024-07-12 00:48:23.746742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.055 qpair failed and we were unable to recover it. 
00:35:56.055 [2024-07-12 00:48:23.746832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.055 [2024-07-12 00:48:23.746859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420
00:35:56.055 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037 connect() failed, errno = 111 / nvme_tcp.c:2374 sock connection error / qpair failed and we were unable to recover it) repeats continuously from 00:48:23.746 through 00:48:23.760 for tqpairs 0x7f6aa8000b90, 0x7f6aa0000b90, 0x7f6ab0000b90, and 0x863990, all targeting addr=10.0.0.2, port=4420 ...]
00:35:56.059 [2024-07-12 00:48:23.760092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 
00:35:56.059 [2024-07-12 00:48:23.760671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.760913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.760999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 
00:35:56.059 [2024-07-12 00:48:23.761229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 
00:35:56.059 [2024-07-12 00:48:23.761760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.761897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.761993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.762019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.762103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.762129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.059 [2024-07-12 00:48:23.762209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.762235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 
00:35:56.059 [2024-07-12 00:48:23.762317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.059 [2024-07-12 00:48:23.762344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.059 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.762420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.762446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.762532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.762559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.762647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.762674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.762763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.762788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 
00:35:56.060 [2024-07-12 00:48:23.762866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.762892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.762977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 
00:35:56.060 [2024-07-12 00:48:23.763424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.763872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.763898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 
00:35:56.060 [2024-07-12 00:48:23.763982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.764109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.764327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.764549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.764673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 
00:35:56.060 [2024-07-12 00:48:23.764780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.764807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.765000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.765122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.765231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.765333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 
00:35:56.060 [2024-07-12 00:48:23.765435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.060 [2024-07-12 00:48:23.765541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.060 [2024-07-12 00:48:23.765567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.060 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.765659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.765687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.765772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.765798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.765883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.765909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.765991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.766540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.766911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.766999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.767108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.767682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.767905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.767930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.768269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.768872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.768898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.768987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 
00:35:56.061 [2024-07-12 00:48:23.769472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.061 [2024-07-12 00:48:23.769722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.061 qpair failed and we were unable to recover it. 00:35:56.061 [2024-07-12 00:48:23.769811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.769839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.769932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.769960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.770055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770464] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:35:56.062 [2024-07-12 00:48:23.770503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 [2024-07-12 00:48:23.770535] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.770961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.770985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.771070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.771638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.771890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.771976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.772199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.772761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.772906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.772983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.773093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.773202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 
00:35:56.062 [2024-07-12 00:48:23.773310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.773430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.773544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.062 [2024-07-12 00:48:23.773573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.062 qpair failed and we were unable to recover it. 00:35:56.062 [2024-07-12 00:48:23.773682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.773711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.773799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.773828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.773915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.773942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.774471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.774966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.774992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.775075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.775610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.775931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.775957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.776136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.776682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.776901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.776927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.777458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.777909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.777934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 
00:35:56.344 [2024-07-12 00:48:23.778022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.778047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.778134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.778164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.778242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.778268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.344 qpair failed and we were unable to recover it. 00:35:56.344 [2024-07-12 00:48:23.778357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.344 [2024-07-12 00:48:23.778382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.778467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.778493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 
00:35:56.345 [2024-07-12 00:48:23.778686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.778712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.778798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.778823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.778902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.778930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 
00:35:56.345 [2024-07-12 00:48:23.779239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 00:35:56.345 [2024-07-12 00:48:23.779685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it. 
00:35:56.345 [2024-07-12 00:48:23.779809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.345 [2024-07-12 00:48:23.779834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.345 qpair failed and we were unable to recover it.
[... identical triplet (posix.c:1037:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 00:48:23.779809 through 00:48:23.792791, cycling over tqpair handles 0x863990, 0x7f6aa0000b90, 0x7f6aa8000b90, and 0x7f6ab0000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:35:56.348 [2024-07-12 00:48:23.792873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.792903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.792992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 
00:35:56.348 [2024-07-12 00:48:23.793429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.793913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.793938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 
00:35:56.348 [2024-07-12 00:48:23.794023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 
00:35:56.348 [2024-07-12 00:48:23.794564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.794913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.794997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 
00:35:56.348 [2024-07-12 00:48:23.795111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 
00:35:56.348 [2024-07-12 00:48:23.795661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.795901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.795978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.348 [2024-07-12 00:48:23.796004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.348 qpair failed and we were unable to recover it. 00:35:56.348 [2024-07-12 00:48:23.796086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.796209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.796771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.796903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.796994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.797310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.797872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.797902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.797978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.798421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.798902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.798930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.799015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.799592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.799937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.799964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.800147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 
00:35:56.349 [2024-07-12 00:48:23.800717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.800931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.800962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.349 [2024-07-12 00:48:23.801053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.349 [2024-07-12 00:48:23.801080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.349 qpair failed and we were unable to recover it. 00:35:56.350 [2024-07-12 00:48:23.801159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.350 [2024-07-12 00:48:23.801187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.350 qpair failed and we were unable to recover it. 
00:35:56.350 [2024-07-12 00:48:23.801265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.350 [2024-07-12 00:48:23.801292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.350 qpair failed and we were unable to recover it. 
00:35:56.350 [... the previous three messages repeat with timestamps 00:48:23.801381 through 00:48:23.807931, cycling over tqpair values 0x7f6aa8000b90, 0x7f6aa0000b90, 0x7f6ab0000b90, and 0x863990, all against addr=10.0.0.2, port=4420 ...]
00:35:56.351 EAL: No free 2048 kB hugepages reported on node 1 
00:35:56.351 [... the same connect() failed / qpair failed messages repeat with timestamps 00:48:23.808011 through 00:48:23.814454, cycling over tqpair values 0x7f6aa8000b90, 0x7f6aa0000b90, 0x7f6ab0000b90, and 0x863990, all against addr=10.0.0.2, port=4420 ...]
00:35:56.352 [2024-07-12 00:48:23.814537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.814564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 00:35:56.352 [2024-07-12 00:48:23.814672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.814712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 00:35:56.352 [2024-07-12 00:48:23.814808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.814835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 00:35:56.352 [2024-07-12 00:48:23.814928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.814956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 00:35:56.352 [2024-07-12 00:48:23.815054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.815079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 
00:35:56.352 [2024-07-12 00:48:23.815165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.352 [2024-07-12 00:48:23.815192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.352 qpair failed and we were unable to recover it. 00:35:56.352 [2024-07-12 00:48:23.815277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.815391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.815502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.815617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.815727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.815843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.815954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.815981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.816304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.816835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.816966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.816992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.817538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.817896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.817979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.818105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.818643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.818903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.818986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.819211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 
00:35:56.353 [2024-07-12 00:48:23.819795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.819903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.819929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.820016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.820042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.820174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.820203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.353 qpair failed and we were unable to recover it. 00:35:56.353 [2024-07-12 00:48:23.820282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.353 [2024-07-12 00:48:23.820307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.820392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.820499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.820609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.820724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.820829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.820930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.820956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.821500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.821942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.821968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.822048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.822632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.822883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.822981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.823205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.823788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.823898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.823924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 
00:35:56.354 [2024-07-12 00:48:23.824362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.354 qpair failed and we were unable to recover it. 00:35:56.354 [2024-07-12 00:48:23.824863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.354 [2024-07-12 00:48:23.824888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.824966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.824992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.825547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.825937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.825963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.826208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.826800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.826908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.826934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.827392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.827957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.827983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.828088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.828711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.828936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.828963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 
00:35:56.355 [2024-07-12 00:48:23.829317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.355 [2024-07-12 00:48:23.829693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.355 qpair failed and we were unable to recover it. 00:35:56.355 [2024-07-12 00:48:23.829808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.829835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.829919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.829945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.830482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.830963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.830989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.831075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.831655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.831914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.831940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.832248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.832933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.832959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.833041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.833162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.833380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.833496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.833660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 
00:35:56.356 [2024-07-12 00:48:23.833804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.833914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.833941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.834022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.834048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.356 [2024-07-12 00:48:23.834166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.356 [2024-07-12 00:48:23.834193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.356 qpair failed and we were unable to recover it. 00:35:56.357 [2024-07-12 00:48:23.834280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.357 [2024-07-12 00:48:23.834407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 00:35:56.357 [2024-07-12 00:48:23.834515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 00:35:56.357 [2024-07-12 00:48:23.834741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 00:35:56.357 [2024-07-12 00:48:23.834848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 00:35:56.357 [2024-07-12 00:48:23.834962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.834991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.357 [2024-07-12 00:48:23.835079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.835105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.357 [2024-07-12 00:48:23.835412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.835439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.357 [2024-07-12 00:48:23.836184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.836214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.357 [2024-07-12 00:48:23.838303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.357 [2024-07-12 00:48:23.838332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.357 qpair failed and we were unable to recover it. 
00:35:56.358 [2024-07-12 00:48:23.841297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 
00:35:56.360 [2024-07-12 00:48:23.848333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.848444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.848550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.848685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.848793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.848895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.848919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.849462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.849943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.849969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.850056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.850620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.850961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.850986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.851175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.851800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.851909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.851934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.852019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.852043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.852122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.852150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.852238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.852263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 
00:35:56.360 [2024-07-12 00:48:23.852345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.360 [2024-07-12 00:48:23.852370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.360 qpair failed and we were unable to recover it. 00:35:56.360 [2024-07-12 00:48:23.852448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.852472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.852558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.852592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.852686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.852712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.852809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.852834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.852919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.852944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.853496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.853926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.853952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.854037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.854616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.854950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.854975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.855170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.855717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.855935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.855959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.856041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.856067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.856149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.856174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 
00:35:56.361 [2024-07-12 00:48:23.856255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.856280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.856363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.856390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.361 qpair failed and we were unable to recover it. 00:35:56.361 [2024-07-12 00:48:23.856477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.361 [2024-07-12 00:48:23.856504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.856593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.856620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.856699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.856724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.856809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.856833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.856917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.856943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.857364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.857797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.857901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.857926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.858439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.858878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.858903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.858985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.859509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.859968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.859994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.860076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 
00:35:56.362 [2024-07-12 00:48:23.860626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.362 [2024-07-12 00:48:23.860852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.362 qpair failed and we were unable to recover it. 00:35:56.362 [2024-07-12 00:48:23.860934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.860959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.861173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.861768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.861888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.861915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.862333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.862867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.862893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.862981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.863419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.863875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.863900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.863983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.864516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.864969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.864994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 
00:35:56.363 [2024-07-12 00:48:23.865077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.865103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.865184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.363 [2024-07-12 00:48:23.865211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.363 qpair failed and we were unable to recover it. 00:35:56.363 [2024-07-12 00:48:23.865303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.865410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.865520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 
00:35:56.364 [2024-07-12 00:48:23.865631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.865748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.865856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.865963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.865988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 
00:35:56.364 [2024-07-12 00:48:23.866162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 
00:35:56.364 [2024-07-12 00:48:23.866705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.866937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.866963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.867048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.867075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 00:35:56.364 [2024-07-12 00:48:23.867159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.364 [2024-07-12 00:48:23.867185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.364 qpair failed and we were unable to recover it. 
00:35:56.364 [2024-07-12 00:48:23.867267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.364 [2024-07-12 00:48:23.867296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420
00:35:56.364 qpair failed and we were unable to recover it.
00:35:56.364 [... same three-record error sequence repeated through 2024-07-12 00:48:23.880019 for tqpair=0x7f6aa0000b90, 0x7f6ab0000b90, 0x7f6aa8000b90, and 0x863990: every connect() to addr=10.0.0.2, port=4420 failed with errno = 111 and each qpair failed and could not be recovered ...]
00:35:56.367 [2024-07-12 00:48:23.880103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 
00:35:56.367 [2024-07-12 00:48:23.880688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.880917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.880944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 
00:35:56.367 [2024-07-12 00:48:23.881267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 
00:35:56.367 [2024-07-12 00:48:23.881806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.881925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.881953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.882033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.882061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.882144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.882172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.367 [2024-07-12 00:48:23.882261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.882288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 
00:35:56.367 [2024-07-12 00:48:23.882371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.367 [2024-07-12 00:48:23.882398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.367 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.882473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.882498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.882579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.882609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.882703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.882730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.882818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.882845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.882935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.882963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.883486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.883930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.883956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.884044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.884591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.884903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.884931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.885128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.885681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.885893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.885922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.886223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 00:35:56.368 [2024-07-12 00:48:23.886660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.368 qpair failed and we were unable to recover it. 
00:35:56.368 [2024-07-12 00:48:23.886774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.368 [2024-07-12 00:48:23.886800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.886875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.886900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.886987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 
00:35:56.369 [2024-07-12 00:48:23.887324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 
00:35:56.369 [2024-07-12 00:48:23.887871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.887898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.887981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 
00:35:56.369 [2024-07-12 00:48:23.888412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.888864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.888894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 
00:35:56.369 [2024-07-12 00:48:23.888973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.889000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.889094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.889123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.889215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.889243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.889328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.889355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 00:35:56.369 [2024-07-12 00:48:23.889432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.369 [2024-07-12 00:48:23.889459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.369 qpair failed and we were unable to recover it. 
00:35:56.372 [2024-07-12 00:48:23.901275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.901381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.901484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.901601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.901711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 
00:35:56.372 [2024-07-12 00:48:23.901829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.901935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.901960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 
00:35:56.372 [2024-07-12 00:48:23.902384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.902842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 
00:35:56.372 [2024-07-12 00:48:23.902957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.902982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.903067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.903177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.903282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.903385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 
00:35:56.372 [2024-07-12 00:48:23.903494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.372 qpair failed and we were unable to recover it. 00:35:56.372 [2024-07-12 00:48:23.903618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.372 [2024-07-12 00:48:23.903643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.903736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.903761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.903853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.903877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.903962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.903988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.904079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.904659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.904914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.904999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.905232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.905800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.905914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.905940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.906396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 A controller has encountered a failure and is being reset. 00:35:56.373 [2024-07-12 00:48:23.906638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.906863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.906889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.906984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ab0000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.907530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x863990 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.907924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.907950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.908027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.908053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 
00:35:56.373 [2024-07-12 00:48:23.908143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.908174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.373 qpair failed and we were unable to recover it. 00:35:56.373 [2024-07-12 00:48:23.908251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.373 [2024-07-12 00:48:23.908276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.908364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.908469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.908578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 
00:35:56.374 [2024-07-12 00:48:23.908697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa8000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.908803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.908916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.908942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 
00:35:56.374 [2024-07-12 00:48:23.909241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 
00:35:56.374 [2024-07-12 00:48:23.909809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.909916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.909941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 
00:35:56.374 [2024-07-12 00:48:23.910339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.910809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 
00:35:56.374 [2024-07-12 00:48:23.910915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.910940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.911019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.911043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6aa0000b90 with addr=10.0.0.2, port=4420 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 qpair failed and we were unable to recover it. 00:35:56.374 [2024-07-12 00:48:23.911159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.374 [2024-07-12 00:48:23.911199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x871320 with addr=10.0.0.2, port=4420 00:35:56.374 [2024-07-12 00:48:23.911218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x871320 is same with the state(5) to be set 00:35:56.374 [2024-07-12 00:48:23.911249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x871320 (9): Bad file descriptor 00:35:56.374 [2024-07-12 00:48:23.911270] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:56.374 [2024-07-12 00:48:23.911291] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:56.374 [2024-07-12 00:48:23.911309] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:56.374 Unable to reset the controller. 00:35:56.374 [2024-07-12 00:48:23.940654] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:56.374 [2024-07-12 00:48:23.940709] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:56.374 [2024-07-12 00:48:23.940734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:56.374 [2024-07-12 00:48:23.940754] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:56.374 [2024-07-12 00:48:23.940773] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:56.374 [2024-07-12 00:48:23.940864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:35:56.374 [2024-07-12 00:48:23.940922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:35:56.374 [2024-07-12 00:48:23.940977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:35:56.374 [2024-07-12 00:48:23.940985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:35:56.374 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:56.374 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:35:56.374 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:56.374 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:56.375 
00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 Malloc0 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 [2024-07-12 00:48:24.110340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 [2024-07-12 00:48:24.138573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.375 00:48:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1084011 00:35:57.307 Controller properly reset. 
00:36:02.568 Initializing NVMe Controllers 00:36:02.568 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:02.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:02.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:02.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:02.568 Initialization complete. Launching workers. 00:36:02.568 Starting thread on core 1 00:36:02.568 Starting thread on core 2 00:36:02.568 Starting thread on core 3 00:36:02.568 Starting thread on core 0 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:02.568 00:36:02.568 real 0m10.669s 00:36:02.568 user 0m33.352s 00:36:02.568 sys 0m8.122s 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.568 ************************************ 00:36:02.568 END TEST nvmf_target_disconnect_tc2 00:36:02.568 ************************************ 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:36:02.568 00:48:29 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.568 00:48:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.568 rmmod nvme_tcp 00:36:02.568 rmmod nvme_fabrics 00:36:02.568 rmmod nvme_keyring 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1084328 ']' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 1084328 ']' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1084328' 00:36:02.568 killing process with pid 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@965 -- # kill 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 1084328 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:02.568 00:48:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.476 00:48:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:04.476 00:36:04.476 real 0m14.912s 00:36:04.476 user 0m58.182s 00:36:04.476 sys 0m10.292s 00:36:04.476 00:48:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:04.476 00:48:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:04.476 ************************************ 00:36:04.476 END TEST nvmf_target_disconnect 00:36:04.476 ************************************ 00:36:04.736 00:48:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:36:04.736 00:48:32 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.736 00:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.736 00:48:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:36:04.736 00:36:04.736 real 27m21.494s 00:36:04.736 user 76m7.215s 00:36:04.736 sys 5m58.866s 
00:36:04.736 00:48:32 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:04.736 00:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.736 ************************************ 00:36:04.736 END TEST nvmf_tcp 00:36:04.736 ************************************ 00:36:04.736 00:48:32 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:36:04.736 00:48:32 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:04.736 00:48:32 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:04.736 00:48:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:04.736 00:48:32 -- common/autotest_common.sh@10 -- # set +x 00:36:04.736 ************************************ 00:36:04.736 START TEST spdkcli_nvmf_tcp 00:36:04.736 ************************************ 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:04.736 * Looking for test storage... 
00:36:04.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 
00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1085267 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1085267 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 1085267 ']' 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:04.736 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.736 [2024-07-12 00:48:32.509303] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:04.736 [2024-07-12 00:48:32.509413] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1085267 ] 00:36:04.736 EAL: No free 2048 kB hugepages reported on node 1 00:36:04.995 [2024-07-12 00:48:32.585672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:04.995 [2024-07-12 00:48:32.692498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.995 [2024-07-12 00:48:32.692506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:04.995 00:48:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:05.253 00:48:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:05.253 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:05.253 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:05.253 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 
00:36:05.253 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:05.253 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:05.253 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:05.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.253 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create 
nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:05.253 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:05.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:05.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:05.254 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:05.254 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:05.254 ' 00:36:07.790 [2024-07-12 00:48:35.425314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.164 [2024-07-12 00:48:36.665495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:11.697 [2024-07-12 00:48:38.960596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:13.601 [2024-07-12 00:48:40.934698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
00:36:14.979 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:14.979 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.979 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 
00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:14.979 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:14.980 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:14.980 00:48:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:15.237 
00:48:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.237 00:48:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:15.237 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:15.237 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:15.237 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:15.237 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:15.237 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:15.237 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:15.238 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:15.238 
'\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:15.238 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:15.238 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:15.238 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:15.238 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:15.238 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:15.238 ' 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:20.507 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:20.507 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:20.507 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:20.507 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:20.507 00:48:48 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:20.507 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.507 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1085267 ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1085267' 00:36:20.765 killing process with pid 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1085267 ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1085267 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 1085267 ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 1085267 00:36:20.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1085267) - No such 
process 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 1085267 is not found' 00:36:20.765 Process with pid 1085267 is not found 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:20.765 00:36:20.765 real 0m16.148s 00:36:20.765 user 0m34.289s 00:36:20.765 sys 0m0.839s 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:20.765 00:48:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:20.765 ************************************ 00:36:20.765 END TEST spdkcli_nvmf_tcp 00:36:20.765 ************************************ 00:36:20.765 00:48:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:20.765 00:48:48 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:36:20.765 00:48:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:20.765 00:48:48 -- common/autotest_common.sh@10 -- # set +x 00:36:20.765 ************************************ 00:36:20.765 START TEST nvmf_identify_passthru 00:36:20.765 ************************************ 00:36:20.765 00:48:48 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:21.026 * Looking for test storage... 
00:36:21.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.026 00:48:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.026 
00:48:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:21.026 00:48:48 nvmf_identify_passthru -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:21.026 00:48:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:21.026 00:48:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.026 00:48:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.026 00:48:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.026 00:48:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:21.026 00:48:48 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:36:21.026 00:48:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@291 
-- # local -a pci_devs 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:36:22.403 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:22.404 
00:48:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:36:22.404 Found 0000:08:00.0 (0x8086 - 0x159b) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:36:22.404 Found 0000:08:00.1 (0x8086 - 0x159b) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:36:22.404 Found net devices under 0000:08:00.0: cvl_0_0 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:36:22.404 Found net devices under 0000:08:00.1: cvl_0_1 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:22.404 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:22.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:22.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:36:22.661 00:36:22.661 --- 10.0.0.2 ping statistics --- 00:36:22.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.661 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:22.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:22.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:36:22.661 00:36:22.661 --- 10.0.0.1 ping statistics --- 00:36:22.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:22.661 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:22.661 00:48:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:22.661 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:22.661 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:22.661 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:22.661 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:36:22.662 00:48:50 nvmf_identify_passthru -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:84:00.0 00:36:22.662 00:48:50 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:84:00.0 00:36:22.662 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:36:22.662 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:36:22.662 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:36:22.662 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:22.662 00:48:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:22.662 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.841 00:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:36:26.841 00:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:36:26.841 00:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:36:26.841 00:48:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:26.841 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.029 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:31.029 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:31.029 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:31.029 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.288 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.288 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1088809 00:36:31.288 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:31.288 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:31.288 00:48:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1088809 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 1088809 ']' 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:31.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:31.288 00:48:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.288 [2024-07-12 00:48:58.928805] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:31.288 [2024-07-12 00:48:58.928911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.288 EAL: No free 2048 kB hugepages reported on node 1 00:36:31.288 [2024-07-12 00:48:58.997347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:31.288 [2024-07-12 00:48:59.088600] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:31.288 [2024-07-12 00:48:59.088661] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:31.288 [2024-07-12 00:48:59.088678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:31.288 [2024-07-12 00:48:59.088692] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:31.288 [2024-07-12 00:48:59.088704] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:31.288 [2024-07-12 00:48:59.088783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:31.288 [2024-07-12 00:48:59.088839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:31.288 [2024-07-12 00:48:59.088919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.288 [2024-07-12 00:48:59.088889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:36:31.547 00:48:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.547 INFO: Log level set to 20 00:36:31.547 INFO: Requests: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "method": "nvmf_set_config", 00:36:31.547 "id": 1, 00:36:31.547 "params": { 00:36:31.547 "admin_cmd_passthru": { 00:36:31.547 "identify_ctrlr": true 00:36:31.547 } 00:36:31.547 } 00:36:31.547 } 00:36:31.547 00:36:31.547 INFO: response: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "id": 1, 00:36:31.547 "result": true 00:36:31.547 } 00:36:31.547 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.547 00:48:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.547 INFO: Setting log level to 20 00:36:31.547 INFO: Setting log level to 20 00:36:31.547 INFO: Log level set to 20 00:36:31.547 INFO: Log level set to 20 00:36:31.547 
INFO: Requests: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "method": "framework_start_init", 00:36:31.547 "id": 1 00:36:31.547 } 00:36:31.547 00:36:31.547 INFO: Requests: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "method": "framework_start_init", 00:36:31.547 "id": 1 00:36:31.547 } 00:36:31.547 00:36:31.547 [2024-07-12 00:48:59.277679] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:31.547 INFO: response: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "id": 1, 00:36:31.547 "result": true 00:36:31.547 } 00:36:31.547 00:36:31.547 INFO: response: 00:36:31.547 { 00:36:31.547 "jsonrpc": "2.0", 00:36:31.547 "id": 1, 00:36:31.547 "result": true 00:36:31.547 } 00:36:31.547 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.547 00:48:59 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.547 INFO: Setting log level to 40 00:36:31.547 INFO: Setting log level to 40 00:36:31.547 INFO: Setting log level to 40 00:36:31.547 [2024-07-12 00:48:59.287522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.547 00:48:59 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.547 00:48:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:36:31.547 00:48:59 
nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:31.547 00:48:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.834 Nvme0n1 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.834 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.834 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.834 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.834 [2024-07-12 00:49:02.155129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.834 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:34.834 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.834 00:49:02 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.834 [ 00:36:34.834 { 00:36:34.834 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:34.834 "subtype": "Discovery", 00:36:34.834 "listen_addresses": [], 00:36:34.834 "allow_any_host": true, 00:36:34.834 "hosts": [] 00:36:34.834 }, 00:36:34.834 { 00:36:34.834 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:34.834 "subtype": "NVMe", 00:36:34.834 "listen_addresses": [ 00:36:34.834 { 00:36:34.834 "trtype": "TCP", 00:36:34.834 "adrfam": "IPv4", 00:36:34.834 "traddr": "10.0.0.2", 00:36:34.834 "trsvcid": "4420" 00:36:34.834 } 00:36:34.834 ], 00:36:34.834 "allow_any_host": true, 00:36:34.834 "hosts": [], 00:36:34.834 "serial_number": "SPDK00000000000001", 00:36:34.834 "model_number": "SPDK bdev Controller", 00:36:34.834 "max_namespaces": 1, 00:36:34.834 "min_cntlid": 1, 00:36:34.835 "max_cntlid": 65519, 00:36:34.835 "namespaces": [ 00:36:34.835 { 00:36:34.835 "nsid": 1, 00:36:34.835 "bdev_name": "Nvme0n1", 00:36:34.835 "name": "Nvme0n1", 00:36:34.835 "nguid": "D15347EB2D4F49F8BC709D5FFF341AB5", 00:36:34.835 "uuid": "d15347eb-2d4f-49f8-bc70-9d5fff341ab5" 00:36:34.835 } 00:36:34.835 ] 00:36:34.835 } 00:36:34.835 ] 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:34.835 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:36:34.835 00:49:02 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:34.835 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:34.835 00:49:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:34.835 rmmod 
nvme_tcp 00:36:34.835 rmmod nvme_fabrics 00:36:34.835 rmmod nvme_keyring 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1088809 ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1088809 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 1088809 ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 1088809 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1088809 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1088809' 00:36:34.835 killing process with pid 1088809 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 1088809 00:36:34.835 00:49:02 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 1088809 00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:36.214 00:49:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.214 00:49:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.214 00:49:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.752 00:49:06 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:38.752 00:36:38.752 real 0m17.499s 00:36:38.752 user 0m26.134s 00:36:38.752 sys 0m2.053s 00:36:38.752 00:49:06 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:38.752 00:49:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:38.752 ************************************ 00:36:38.752 END TEST nvmf_identify_passthru 00:36:38.753 ************************************ 00:36:38.753 00:49:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:38.753 00:49:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:38.753 00:49:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:38.753 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:36:38.753 ************************************ 00:36:38.753 START TEST nvmf_dif 00:36:38.753 ************************************ 00:36:38.753 00:49:06 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:38.753 * Looking for test storage... 
00:36:38.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:38.753 00:49:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:38.753 00:49:06 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:38.753 00:49:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:38.753 00:49:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.753 00:49:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.753 00:49:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.753 00:49:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:38.753 00:49:06 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:38.753 00:49:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.753 00:49:06 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:38.753 00:49:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:38.753 00:49:06 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:36:38.753 00:49:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:40.135 00:49:07 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:36:40.136 Found 0000:08:00.0 (0x8086 - 0x159b) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 
(0x8086 - 0x159b)' 00:36:40.136 Found 0000:08:00.1 (0x8086 - 0x159b) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:36:40.136 Found net devices under 0000:08:00.0: cvl_0_0 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:36:40.136 Found net devices under 0000:08:00.1: cvl_0_1 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:40.136 00:49:07 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:40.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:40.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:36:40.136 00:36:40.136 --- 10.0.0.2 ping statistics --- 00:36:40.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.136 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:40.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:40.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:36:40.136 00:36:40.136 --- 10.0.0.1 ping statistics --- 00:36:40.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:40.136 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:40.136 00:49:07 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:41.126 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:36:41.126 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:41.126 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:36:41.126 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:36:41.126 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:36:41.126 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:36:41.126 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:36:41.126 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:36:41.126 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:36:41.126 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:36:41.126 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:36:41.126 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:36:41.126 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:36:41.126 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:36:41.126 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:36:41.126 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:36:41.126 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:36:41.126 00:49:08 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:41.126 00:49:08 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:41.126 00:49:08 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:41.126 00:49:08 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:41.126 00:49:08 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:41.126 00:49:08 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:41.385 00:49:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:41.385 00:49:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:41.385 00:49:08 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.385 00:49:08 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1091243 00:36:41.385 00:49:08 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:41.385 00:49:08 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1091243 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 1091243 ']' 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:41.385 00:49:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.385 [2024-07-12 00:49:09.009684] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:36:41.385 [2024-07-12 00:49:09.009796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.385 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.385 [2024-07-12 00:49:09.074901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.385 [2024-07-12 00:49:09.161250] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:41.385 [2024-07-12 00:49:09.161309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:41.385 [2024-07-12 00:49:09.161324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:41.385 [2024-07-12 00:49:09.161338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:41.385 [2024-07-12 00:49:09.161349] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:41.385 [2024-07-12 00:49:09.161385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:36:41.644 00:49:09 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 00:49:09 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:41.644 00:49:09 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:41.644 00:49:09 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 [2024-07-12 00:49:09.283748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.644 00:49:09 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 ************************************ 00:36:41.644 START TEST fio_dif_1_default 00:36:41.644 ************************************ 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 bdev_null0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:41.644 [2024-07-12 00:49:09.340015] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:41.644 { 00:36:41.644 "params": { 00:36:41.644 "name": "Nvme$subsystem", 00:36:41.644 "trtype": "$TEST_TRANSPORT", 00:36:41.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:41.644 "adrfam": "ipv4", 00:36:41.644 "trsvcid": "$NVMF_PORT", 00:36:41.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:41.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:41.644 "hdgst": ${hdgst:-false}, 00:36:41.644 "ddgst": ${ddgst:-false} 00:36:41.644 }, 00:36:41.644 "method": "bdev_nvme_attach_controller" 00:36:41.644 } 00:36:41.644 EOF 00:36:41.644 )") 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:41.644 00:49:09 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:41.644 "params": { 00:36:41.644 "name": "Nvme0", 00:36:41.644 "trtype": "tcp", 00:36:41.644 "traddr": "10.0.0.2", 00:36:41.644 "adrfam": "ipv4", 00:36:41.644 "trsvcid": "4420", 00:36:41.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:41.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:41.644 "hdgst": false, 00:36:41.644 "ddgst": false 00:36:41.644 }, 00:36:41.644 "method": "bdev_nvme_attach_controller" 00:36:41.644 }' 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:41.644 00:49:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.903 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.903 fio-3.35 
00:36:41.903 Starting 1 thread 00:36:41.903 EAL: No free 2048 kB hugepages reported on node 1 00:36:54.103 00:36:54.103 filename0: (groupid=0, jobs=1): err= 0: pid=1091421: Fri Jul 12 00:49:20 2024 00:36:54.103 read: IOPS=190, BW=761KiB/s (780kB/s)(7616KiB/10004msec) 00:36:54.103 slat (nsec): min=7436, max=55616, avg=8952.34, stdev=2966.66 00:36:54.103 clat (usec): min=573, max=45278, avg=20988.09, stdev=20337.18 00:36:54.103 lat (usec): min=582, max=45323, avg=20997.04, stdev=20336.97 00:36:54.103 clat percentiles (usec): 00:36:54.103 | 1.00th=[ 594], 5.00th=[ 611], 10.00th=[ 627], 20.00th=[ 660], 00:36:54.103 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 775], 60.00th=[41157], 00:36:54.103 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:54.103 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:36:54.103 | 99.99th=[45351] 00:36:54.103 bw ( KiB/s): min= 704, max= 768, per=99.83%, avg=760.00, stdev=20.44, samples=20 00:36:54.103 iops : min= 176, max= 192, avg=190.00, stdev= 5.11, samples=20 00:36:54.103 lat (usec) : 750=49.84%, 1000=0.16% 00:36:54.103 lat (msec) : 50=50.00% 00:36:54.103 cpu : usr=90.21%, sys=9.27%, ctx=20, majf=0, minf=265 00:36:54.103 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:54.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:54.103 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:54.103 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:54.103 00:36:54.103 Run status group 0 (all jobs): 00:36:54.103 READ: bw=761KiB/s (780kB/s), 761KiB/s-761KiB/s (780kB/s-780kB/s), io=7616KiB (7799kB), run=10004-10004msec 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:54.103 00:49:20 
nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.103 00:36:54.103 real 0m11.059s 00:36:54.103 user 0m9.900s 00:36:54.103 sys 0m1.177s 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:54.103 00:49:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:54.103 ************************************ 00:36:54.103 END TEST fio_dif_1_default 00:36:54.103 ************************************ 00:36:54.103 00:49:20 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:54.103 00:49:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:54.103 00:49:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:54.103 00:49:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 ************************************ 00:36:54.104 START TEST fio_dif_1_multi_subsystems 00:36:54.104 
************************************ 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 bdev_null0 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 [2024-07-12 00:49:20.459081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 bdev_null1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 
00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:54.104 { 00:36:54.104 "params": { 00:36:54.104 "name": "Nvme$subsystem", 00:36:54.104 "trtype": "$TEST_TRANSPORT", 00:36:54.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.104 "adrfam": "ipv4", 00:36:54.104 "trsvcid": "$NVMF_PORT", 00:36:54.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.104 "hdgst": ${hdgst:-false}, 00:36:54.104 "ddgst": ${ddgst:-false} 00:36:54.104 }, 00:36:54.104 "method": "bdev_nvme_attach_controller" 00:36:54.104 } 00:36:54.104 EOF 00:36:54.104 )") 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 
-- # for sanitizer in "${sanitizers[@]}" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:54.104 { 00:36:54.104 "params": { 00:36:54.104 "name": "Nvme$subsystem", 00:36:54.104 "trtype": "$TEST_TRANSPORT", 00:36:54.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:54.104 "adrfam": "ipv4", 00:36:54.104 "trsvcid": "$NVMF_PORT", 00:36:54.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:54.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:54.104 "hdgst": ${hdgst:-false}, 00:36:54.104 "ddgst": ${ddgst:-false} 00:36:54.104 }, 00:36:54.104 "method": "bdev_nvme_attach_controller" 00:36:54.104 } 00:36:54.104 EOF 00:36:54.104 )") 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:54.104 "params": { 00:36:54.104 "name": "Nvme0", 00:36:54.104 "trtype": "tcp", 00:36:54.104 "traddr": "10.0.0.2", 00:36:54.104 "adrfam": "ipv4", 00:36:54.104 "trsvcid": "4420", 00:36:54.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:54.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:54.104 "hdgst": false, 00:36:54.104 "ddgst": false 00:36:54.104 }, 00:36:54.104 "method": "bdev_nvme_attach_controller" 00:36:54.104 },{ 00:36:54.104 "params": { 00:36:54.104 "name": "Nvme1", 00:36:54.104 "trtype": "tcp", 00:36:54.104 "traddr": "10.0.0.2", 00:36:54.104 "adrfam": "ipv4", 00:36:54.104 "trsvcid": "4420", 00:36:54.104 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:54.104 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:54.104 "hdgst": false, 00:36:54.104 "ddgst": false 00:36:54.104 }, 00:36:54.104 "method": "bdev_nvme_attach_controller" 00:36:54.104 }' 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:54.104 00:49:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:54.104 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:54.104 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:54.104 fio-3.35 00:36:54.104 Starting 2 threads 00:36:54.104 EAL: No free 2048 kB hugepages reported on node 1 00:37:04.062 00:37:04.062 filename0: (groupid=0, jobs=1): err= 0: pid=1092485: Fri Jul 12 00:49:31 2024 00:37:04.062 read: IOPS=102, BW=412KiB/s (422kB/s)(4128KiB/10024msec) 00:37:04.062 slat (nsec): min=7817, max=50487, avg=12112.11, stdev=6818.73 00:37:04.062 clat (usec): min=615, max=44967, avg=38813.11, stdev=9144.59 00:37:04.062 lat (usec): min=625, max=45008, avg=38825.22, stdev=9144.31 00:37:04.062 clat percentiles (usec): 00:37:04.062 | 1.00th=[ 660], 5.00th=[ 701], 10.00th=[41157], 20.00th=[41157], 00:37:04.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:04.062 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:04.062 | 99.00th=[41157], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:37:04.062 | 99.99th=[44827] 00:37:04.062 bw ( KiB/s): min= 384, max= 480, per=50.79%, avg=411.20, stdev=28.00, samples=20 00:37:04.062 iops : min= 96, max= 120, avg=102.80, stdev= 7.00, samples=20 00:37:04.062 lat (usec) : 750=5.43% 00:37:04.062 lat (msec) : 50=94.57% 00:37:04.062 cpu : usr=97.01%, sys=2.68%, ctx=15, majf=0, minf=80 00:37:04.062 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.062 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.062 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.062 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:04.062 filename1: (groupid=0, jobs=1): err= 0: pid=1092486: Fri Jul 12 00:49:31 2024 00:37:04.062 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10014msec) 00:37:04.062 slat (nsec): min=5452, max=60153, avg=13330.58, stdev=5243.06 00:37:04.062 clat (usec): min=621, max=44707, avg=40174.21, stdev=5664.59 00:37:04.062 lat (usec): min=631, max=44731, avg=40187.54, stdev=5664.55 00:37:04.062 clat percentiles (usec): 00:37:04.062 | 1.00th=[ 652], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:04.062 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:04.062 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:04.062 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:37:04.062 | 99.99th=[44827] 00:37:04.062 bw ( KiB/s): min= 384, max= 448, per=48.93%, avg=396.80, stdev=21.78, samples=20 00:37:04.062 iops : min= 96, max= 112, avg=99.20, stdev= 5.44, samples=20 00:37:04.062 lat (usec) : 750=2.01% 00:37:04.062 lat (msec) : 50=97.99% 00:37:04.062 cpu : usr=97.20%, sys=2.42%, ctx=16, majf=0, minf=204 00:37:04.062 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:04.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:04.062 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:04.062 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:04.062 00:37:04.062 Run status group 0 (all jobs): 00:37:04.062 READ: bw=809KiB/s (829kB/s), 398KiB/s-412KiB/s (407kB/s-422kB/s), io=8112KiB (8307kB), run=10014-10024msec 00:37:04.062 00:49:31 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.062 00:37:04.062 real 0m11.170s 00:37:04.062 user 0m20.422s 00:37:04.062 sys 0m0.797s 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:04.062 ************************************ 00:37:04.062 END TEST fio_dif_1_multi_subsystems 00:37:04.062 ************************************ 00:37:04.062 00:49:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:04.062 00:49:31 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:04.062 00:49:31 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:04.062 00:49:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:04.062 ************************************ 00:37:04.062 START TEST fio_dif_rand_params 00:37:04.062 ************************************ 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:04.062 00:49:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:04.062 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.063 bdev_null0 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.063 
00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:04.063 [2024-07-12 00:49:31.664415] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:04.063 { 00:37:04.063 "params": { 00:37:04.063 "name": "Nvme$subsystem", 00:37:04.063 "trtype": "$TEST_TRANSPORT", 00:37:04.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:04.063 "adrfam": "ipv4", 00:37:04.063 "trsvcid": "$NVMF_PORT", 00:37:04.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:04.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:04.063 "hdgst": ${hdgst:-false}, 00:37:04.063 "ddgst": ${ddgst:-false} 00:37:04.063 }, 00:37:04.063 "method": "bdev_nvme_attach_controller" 00:37:04.063 } 
00:37:04.063 EOF 00:37:04.063 )") 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.063 00:49:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:04.063 "params": { 00:37:04.063 "name": "Nvme0", 00:37:04.063 "trtype": "tcp", 00:37:04.063 "traddr": "10.0.0.2", 00:37:04.063 "adrfam": "ipv4", 00:37:04.063 "trsvcid": "4420", 00:37:04.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.063 "hdgst": false, 00:37:04.063 "ddgst": false 00:37:04.063 }, 00:37:04.063 "method": "bdev_nvme_attach_controller" 00:37:04.063 }' 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:04.063 00:49:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:04.063 00:49:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.320 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:04.320 ... 00:37:04.320 fio-3.35 00:37:04.320 Starting 3 threads 00:37:04.320 EAL: No free 2048 kB hugepages reported on node 1 00:37:09.577 00:37:09.577 filename0: (groupid=0, jobs=1): err= 0: pid=1093549: Fri Jul 12 00:49:37 2024 00:37:09.577 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5025msec) 00:37:09.577 slat (nsec): min=8565, max=59316, avg=19803.36, stdev=5068.14 00:37:09.577 clat (usec): min=4812, max=89426, avg=14583.88, stdev=8129.47 00:37:09.577 lat (usec): min=4827, max=89437, avg=14603.68, stdev=8130.04 00:37:09.577 clat percentiles (usec): 00:37:09.577 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6652], 20.00th=[10028], 00:37:09.577 | 30.00th=[11338], 40.00th=[13304], 50.00th=[14484], 60.00th=[15008], 00:37:09.577 | 70.00th=[15533], 80.00th=[16712], 90.00th=[18482], 95.00th=[19530], 00:37:09.577 | 99.00th=[54789], 99.50th=[55837], 99.90th=[57934], 99.95th=[89654], 00:37:09.577 | 99.99th=[89654] 00:37:09.577 bw ( KiB/s): min=20736, max=35072, per=35.28%, avg=26342.40, stdev=4629.21, samples=10 00:37:09.577 iops : min= 162, max= 274, avg=205.80, stdev=36.17, samples=10 00:37:09.577 lat (msec) : 10=18.51%, 20=77.52%, 50=2.23%, 100=1.74% 00:37:09.577 cpu : usr=94.94%, sys=4.46%, ctx=34, majf=0, minf=181 00:37:09.577 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.577 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.577 filename0: (groupid=0, jobs=1): err= 0: pid=1093550: Fri Jul 12 
00:49:37 2024 00:37:09.577 read: IOPS=174, BW=21.8MiB/s (22.8MB/s)(109MiB/5004msec) 00:37:09.577 slat (nsec): min=7765, max=28018, avg=14515.90, stdev=4126.63 00:37:09.577 clat (usec): min=4819, max=95442, avg=17211.89, stdev=12451.26 00:37:09.577 lat (usec): min=4831, max=95467, avg=17226.41, stdev=12451.19 00:37:09.577 clat percentiles (usec): 00:37:09.577 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[12125], 00:37:09.577 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14091], 60.00th=[14484], 00:37:09.577 | 70.00th=[14877], 80.00th=[15270], 90.00th=[16909], 95.00th=[52691], 00:37:09.577 | 99.00th=[55837], 99.50th=[56361], 99.90th=[95945], 99.95th=[95945], 00:37:09.577 | 99.99th=[95945] 00:37:09.577 bw ( KiB/s): min=14080, max=27904, per=29.80%, avg=22246.40, stdev=4421.63, samples=10 00:37:09.577 iops : min= 110, max= 218, avg=173.80, stdev=34.54, samples=10 00:37:09.577 lat (msec) : 10=11.02%, 20=79.33%, 50=2.07%, 100=7.58% 00:37:09.577 cpu : usr=95.44%, sys=4.14%, ctx=15, majf=0, minf=127 00:37:09.577 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 issued rwts: total=871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.577 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.577 filename0: (groupid=0, jobs=1): err= 0: pid=1093551: Fri Jul 12 00:49:37 2024 00:37:09.577 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(129MiB/5005msec) 00:37:09.577 slat (nsec): min=7788, max=40143, avg=15251.80, stdev=4676.96 00:37:09.577 clat (usec): min=5217, max=92340, avg=14585.86, stdev=9457.30 00:37:09.577 lat (usec): min=5229, max=92352, avg=14601.11, stdev=9456.99 00:37:09.577 clat percentiles (usec): 00:37:09.577 | 1.00th=[ 6259], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10028], 00:37:09.577 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13304], 
60.00th=[13698], 00:37:09.577 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[46924], 00:37:09.577 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[92799], 00:37:09.577 | 99.99th=[92799] 00:37:09.577 bw ( KiB/s): min=19968, max=32256, per=35.15%, avg=26240.00, stdev=3807.15, samples=10 00:37:09.577 iops : min= 156, max= 252, avg=205.00, stdev=29.74, samples=10 00:37:09.577 lat (msec) : 10=19.65%, 20=74.90%, 50=1.17%, 100=4.28% 00:37:09.577 cpu : usr=94.54%, sys=5.00%, ctx=10, majf=0, minf=42 00:37:09.577 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.577 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.577 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.577 00:37:09.577 Run status group 0 (all jobs): 00:37:09.577 READ: bw=72.9MiB/s (76.5MB/s), 21.8MiB/s-25.7MiB/s (22.8MB/s-26.9MB/s), io=366MiB (384MB), run=5004-5025msec 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 bdev_null0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 [2024-07-12 00:49:37.612060] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:37:09.836 bdev_null1 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:09.836 
00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 bdev_null2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:09.836 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:10.095 { 00:37:10.095 "params": { 00:37:10.095 "name": "Nvme$subsystem", 00:37:10.095 "trtype": "$TEST_TRANSPORT", 00:37:10.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.095 "adrfam": "ipv4", 00:37:10.095 "trsvcid": "$NVMF_PORT", 00:37:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.095 "hdgst": ${hdgst:-false}, 00:37:10.095 "ddgst": ${ddgst:-false} 00:37:10.095 }, 00:37:10.095 "method": "bdev_nvme_attach_controller" 00:37:10.095 } 00:37:10.095 EOF 00:37:10.095 )") 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:10.095 00:49:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:10.095 { 00:37:10.095 "params": { 00:37:10.095 "name": "Nvme$subsystem", 00:37:10.095 "trtype": "$TEST_TRANSPORT", 00:37:10.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.095 "adrfam": "ipv4", 00:37:10.095 "trsvcid": "$NVMF_PORT", 00:37:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.095 "hdgst": ${hdgst:-false}, 00:37:10.095 "ddgst": ${ddgst:-false} 00:37:10.095 }, 00:37:10.095 "method": "bdev_nvme_attach_controller" 00:37:10.095 } 00:37:10.095 EOF 00:37:10.095 )") 00:37:10.095 00:49:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:10.095 { 00:37:10.095 "params": { 00:37:10.095 "name": "Nvme$subsystem", 00:37:10.095 "trtype": "$TEST_TRANSPORT", 00:37:10.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.095 "adrfam": "ipv4", 00:37:10.095 "trsvcid": "$NVMF_PORT", 00:37:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.095 "hdgst": ${hdgst:-false}, 00:37:10.095 "ddgst": ${ddgst:-false} 00:37:10.095 }, 00:37:10.095 "method": "bdev_nvme_attach_controller" 00:37:10.095 } 00:37:10.095 EOF 00:37:10.095 )") 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:10.095 00:49:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:10.095 "params": { 00:37:10.095 "name": "Nvme0", 00:37:10.095 "trtype": "tcp", 00:37:10.095 "traddr": "10.0.0.2", 00:37:10.095 "adrfam": "ipv4", 00:37:10.095 "trsvcid": "4420", 00:37:10.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.096 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.096 "hdgst": false, 00:37:10.096 "ddgst": false 00:37:10.096 }, 00:37:10.096 "method": "bdev_nvme_attach_controller" 00:37:10.096 },{ 00:37:10.096 "params": { 00:37:10.096 "name": "Nvme1", 00:37:10.096 "trtype": "tcp", 00:37:10.096 "traddr": "10.0.0.2", 00:37:10.096 "adrfam": "ipv4", 00:37:10.096 "trsvcid": "4420", 00:37:10.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.096 "hdgst": false, 00:37:10.096 "ddgst": false 00:37:10.096 }, 00:37:10.096 "method": "bdev_nvme_attach_controller" 00:37:10.096 },{ 00:37:10.096 "params": { 00:37:10.096 "name": "Nvme2", 00:37:10.096 "trtype": "tcp", 00:37:10.096 "traddr": "10.0.0.2", 00:37:10.096 "adrfam": "ipv4", 00:37:10.096 "trsvcid": "4420", 00:37:10.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:10.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:10.096 "hdgst": false, 00:37:10.096 "ddgst": false 00:37:10.096 }, 00:37:10.096 "method": "bdev_nvme_attach_controller" 00:37:10.096 }' 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.096 00:49:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:10.096 00:49:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.096 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.096 ... 00:37:10.096 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.096 ... 00:37:10.096 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.096 ... 
00:37:10.096 fio-3.35 00:37:10.096 Starting 24 threads 00:37:10.353 EAL: No free 2048 kB hugepages reported on node 1 00:37:22.583 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=1094202: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=75, BW=304KiB/s (311kB/s)(3072KiB/10116msec) 00:37:22.583 slat (usec): min=6, max=146, avg=21.19, stdev=24.45 00:37:22.583 clat (msec): min=9, max=402, avg=210.56, stdev=101.14 00:37:22.583 lat (msec): min=9, max=402, avg=210.58, stdev=101.15 00:37:22.583 clat percentiles (msec): 00:37:22.583 | 1.00th=[ 10], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 88], 00:37:22.583 | 30.00th=[ 215], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 255], 00:37:22.583 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 292], 95.00th=[ 372], 00:37:22.583 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:37:22.583 | 99.99th=[ 401] 00:37:22.583 bw ( KiB/s): min= 128, max= 780, per=5.29%, avg=300.55, stdev=163.03, samples=20 00:37:22.583 iops : min= 32, max= 195, avg=75.10, stdev=40.71, samples=20 00:37:22.583 lat (msec) : 10=2.08%, 50=16.93%, 100=3.65%, 250=24.74%, 500=52.60% 00:37:22.583 cpu : usr=98.14%, sys=1.28%, ctx=189, majf=0, minf=30 00:37:22.583 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=1094203: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10136msec) 00:37:22.583 slat (usec): min=16, max=118, avg=39.30, stdev=19.57 00:37:22.583 clat (msec): min=31, max=625, avg=297.79, stdev=158.93 00:37:22.583 lat (msec): min=31, max=625, avg=297.83, stdev=158.93 00:37:22.583 clat percentiles (msec): 00:37:22.583 | 
1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.583 | 30.00th=[ 271], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.583 | 70.00th=[ 380], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 493], 00:37:22.583 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.583 | 99.99th=[ 625] 00:37:22.583 bw ( KiB/s): min= 128, max= 1024, per=3.92%, avg=222.32, stdev=206.41, samples=19 00:37:22.583 iops : min= 32, max= 256, avg=55.58, stdev=51.60, samples=19 00:37:22.583 lat (msec) : 50=23.53%, 250=4.04%, 500=67.65%, 750=4.78% 00:37:22.583 cpu : usr=98.48%, sys=1.11%, ctx=15, majf=0, minf=21 00:37:22.583 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=1094204: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=70, BW=283KiB/s (290kB/s)(2880KiB/10178msec) 00:37:22.583 slat (usec): min=5, max=130, avg=25.80, stdev=26.76 00:37:22.583 clat (msec): min=39, max=395, avg=225.94, stdev=106.42 00:37:22.583 lat (msec): min=39, max=395, avg=225.97, stdev=106.43 00:37:22.583 clat percentiles (msec): 00:37:22.583 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 85], 00:37:22.583 | 30.00th=[ 232], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 259], 00:37:22.583 | 70.00th=[ 271], 80.00th=[ 288], 90.00th=[ 351], 95.00th=[ 376], 00:37:22.583 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:37:22.583 | 99.99th=[ 397] 00:37:22.583 bw ( KiB/s): min= 128, max= 768, per=4.96%, avg=281.60, stdev=158.68, samples=20 00:37:22.583 iops : min= 32, max= 192, avg=70.40, stdev=39.67, samples=20 00:37:22.583 lat (msec) : 50=15.56%, 100=8.61%, 250=17.50%, 
500=58.33% 00:37:22.583 cpu : usr=98.37%, sys=1.10%, ctx=45, majf=0, minf=27 00:37:22.583 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=1094205: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10069msec) 00:37:22.583 slat (usec): min=9, max=146, avg=47.82, stdev=42.50 00:37:22.583 clat (msec): min=26, max=624, avg=295.73, stdev=159.67 00:37:22.583 lat (msec): min=26, max=624, avg=295.78, stdev=159.65 00:37:22.583 clat percentiles (msec): 00:37:22.583 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 42], 00:37:22.583 | 30.00th=[ 247], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 376], 00:37:22.583 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 498], 00:37:22.583 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.583 | 99.99th=[ 625] 00:37:22.583 bw ( KiB/s): min= 112, max= 1024, per=3.92%, avg=222.32, stdev=208.40, samples=19 00:37:22.583 iops : min= 28, max= 256, avg=55.58, stdev=52.10, samples=19 00:37:22.583 lat (msec) : 50=23.16%, 100=0.37%, 250=6.62%, 500=65.07%, 750=4.78% 00:37:22.583 cpu : usr=98.30%, sys=1.15%, ctx=51, majf=0, minf=15 00:37:22.583 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: 
pid=1094206: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10092msec) 00:37:22.583 slat (usec): min=14, max=145, avg=76.17, stdev=30.84 00:37:22.583 clat (msec): min=24, max=581, avg=287.69, stdev=150.29 00:37:22.583 lat (msec): min=25, max=581, avg=287.77, stdev=150.31 00:37:22.583 clat percentiles (msec): 00:37:22.583 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 56], 00:37:22.583 | 30.00th=[ 275], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 368], 00:37:22.583 | 70.00th=[ 380], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 414], 00:37:22.583 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:37:22.583 | 99.99th=[ 584] 00:37:22.583 bw ( KiB/s): min= 128, max= 896, per=4.04%, avg=229.05, stdev=188.26, samples=19 00:37:22.583 iops : min= 32, max= 224, avg=57.26, stdev=47.07, samples=19 00:37:22.583 lat (msec) : 50=19.64%, 100=6.07%, 250=0.36%, 500=71.07%, 750=2.86% 00:37:22.583 cpu : usr=98.01%, sys=1.36%, ctx=29, majf=0, minf=29 00:37:22.583 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:22.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.583 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.583 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.583 filename0: (groupid=0, jobs=1): err= 0: pid=1094207: Fri Jul 12 00:49:48 2024 00:37:22.583 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10160msec) 00:37:22.583 slat (usec): min=9, max=133, avg=56.89, stdev=37.98 00:37:22.583 clat (msec): min=39, max=581, avg=274.08, stdev=145.14 00:37:22.583 lat (msec): min=39, max=581, avg=274.14, stdev=145.16 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 41], 20.00th=[ 58], 00:37:22.584 | 30.00th=[ 230], 40.00th=[ 262], 50.00th=[ 351], 60.00th=[ 368], 00:37:22.584 | 70.00th=[ 372], 80.00th=[ 
380], 90.00th=[ 405], 95.00th=[ 418], 00:37:22.584 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:37:22.584 | 99.99th=[ 584] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=4.27%, avg=242.53, stdev=174.81, samples=19 00:37:22.584 iops : min= 32, max= 224, avg=60.63, stdev=43.70, samples=19 00:37:22.584 lat (msec) : 50=18.92%, 100=5.41%, 250=8.11%, 500=64.86%, 750=2.70% 00:37:22.584 cpu : usr=98.39%, sys=1.07%, ctx=43, majf=0, minf=30 00:37:22.584 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename0: (groupid=0, jobs=1): err= 0: pid=1094208: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10093msec) 00:37:22.584 slat (usec): min=8, max=147, avg=67.93, stdev=38.04 00:37:22.584 clat (msec): min=24, max=514, avg=272.22, stdev=137.71 00:37:22.584 lat (msec): min=24, max=514, avg=272.28, stdev=137.73 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 57], 00:37:22.584 | 30.00th=[ 251], 40.00th=[ 275], 50.00th=[ 342], 60.00th=[ 363], 00:37:22.584 | 70.00th=[ 372], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 409], 00:37:22.584 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 514], 00:37:22.584 | 99.99th=[ 514] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=4.06%, avg=230.40, stdev=182.83, samples=20 00:37:22.584 iops : min= 32, max= 224, avg=57.60, stdev=45.71, samples=20 00:37:22.584 lat (msec) : 50=18.92%, 100=5.07%, 250=5.74%, 500=69.26%, 750=1.01% 00:37:22.584 cpu : usr=98.21%, sys=1.28%, ctx=29, majf=0, minf=20 00:37:22.584 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, 
>=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename0: (groupid=0, jobs=1): err= 0: pid=1094209: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10141msec) 00:37:22.584 slat (usec): min=24, max=142, avg=96.98, stdev=23.48 00:37:22.584 clat (msec): min=26, max=629, avg=295.85, stdev=150.92 00:37:22.584 lat (msec): min=26, max=629, avg=295.95, stdev=150.92 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:37:22.584 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.584 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 405], 95.00th=[ 414], 00:37:22.584 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 634], 99.95th=[ 634], 00:37:22.584 | 99.99th=[ 634] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=3.92%, avg=222.32, stdev=179.27, samples=19 00:37:22.584 iops : min= 32, max= 224, avg=55.58, stdev=44.82, samples=19 00:37:22.584 lat (msec) : 50=23.53%, 250=3.68%, 500=69.49%, 750=3.31% 00:37:22.584 cpu : usr=97.93%, sys=1.43%, ctx=38, majf=0, minf=28 00:37:22.584 IO depths : 1=1.1%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094210: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=67, BW=270KiB/s (277kB/s)(2744KiB/10157msec) 00:37:22.584 slat (usec): min=9, max=139, avg=30.77, stdev=26.20 
00:37:22.584 clat (msec): min=24, max=518, avg=236.26, stdev=108.78 00:37:22.584 lat (msec): min=24, max=518, avg=236.29, stdev=108.77 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 95], 00:37:22.584 | 30.00th=[ 236], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 264], 00:37:22.584 | 70.00th=[ 275], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 376], 00:37:22.584 | 99.00th=[ 393], 99.50th=[ 468], 99.90th=[ 518], 99.95th=[ 518], 00:37:22.584 | 99.99th=[ 518] 00:37:22.584 bw ( KiB/s): min= 128, max= 912, per=4.73%, avg=268.00, stdev=172.94, samples=20 00:37:22.584 iops : min= 32, max= 228, avg=67.00, stdev=43.23, samples=20 00:37:22.584 lat (msec) : 50=16.33%, 100=4.66%, 250=16.03%, 500=62.68%, 750=0.29% 00:37:22.584 cpu : usr=98.14%, sys=1.31%, ctx=62, majf=0, minf=27 00:37:22.584 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094211: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10158msec) 00:37:22.584 slat (usec): min=14, max=158, avg=103.45, stdev=22.89 00:37:22.584 clat (msec): min=33, max=459, avg=289.30, stdev=143.33 00:37:22.584 lat (msec): min=33, max=459, avg=289.40, stdev=143.33 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:37:22.584 | 30.00th=[ 305], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 368], 00:37:22.584 | 70.00th=[ 380], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 414], 00:37:22.584 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:37:22.584 | 99.99th=[ 460] 00:37:22.584 bw ( KiB/s): 
min= 128, max= 896, per=3.83%, avg=217.60, stdev=186.19, samples=20 00:37:22.584 iops : min= 32, max= 224, avg=54.40, stdev=46.55, samples=20 00:37:22.584 lat (msec) : 50=22.86%, 250=2.86%, 500=74.29% 00:37:22.584 cpu : usr=96.89%, sys=1.91%, ctx=240, majf=0, minf=22 00:37:22.584 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094212: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=70, BW=283KiB/s (290kB/s)(2880KiB/10178msec) 00:37:22.584 slat (usec): min=10, max=144, avg=41.71, stdev=31.93 00:37:22.584 clat (msec): min=32, max=373, avg=225.84, stdev=103.52 00:37:22.584 lat (msec): min=32, max=373, avg=225.88, stdev=103.50 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 95], 00:37:22.584 | 30.00th=[ 226], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 259], 00:37:22.584 | 70.00th=[ 271], 80.00th=[ 300], 90.00th=[ 351], 95.00th=[ 351], 00:37:22.584 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:37:22.584 | 99.99th=[ 376] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=4.96%, avg=281.60, stdev=163.45, samples=20 00:37:22.584 iops : min= 32, max= 224, avg=70.40, stdev=40.86, samples=20 00:37:22.584 lat (msec) : 50=17.78%, 100=4.44%, 250=17.78%, 500=60.00% 00:37:22.584 cpu : usr=97.79%, sys=1.51%, ctx=36, majf=0, minf=32 00:37:22.584 IO depths : 1=2.5%, 2=8.8%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: 
total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094213: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10178msec) 00:37:22.584 slat (usec): min=4, max=141, avg=78.99, stdev=36.36 00:37:22.584 clat (msec): min=32, max=501, avg=260.30, stdev=136.45 00:37:22.584 lat (msec): min=32, max=501, avg=260.38, stdev=136.46 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 42], 00:37:22.584 | 30.00th=[ 230], 40.00th=[ 255], 50.00th=[ 305], 60.00th=[ 363], 00:37:22.584 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 397], 00:37:22.584 | 99.00th=[ 409], 99.50th=[ 481], 99.90th=[ 502], 99.95th=[ 502], 00:37:22.584 | 99.99th=[ 502] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=4.29%, avg=243.20, stdev=180.02, samples=20 00:37:22.584 iops : min= 32, max= 224, avg=60.80, stdev=45.00, samples=20 00:37:22.584 lat (msec) : 50=20.51%, 100=5.13%, 250=11.06%, 500=62.98%, 750=0.32% 00:37:22.584 cpu : usr=97.97%, sys=1.36%, ctx=57, majf=0, minf=21 00:37:22.584 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094214: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=55, BW=220KiB/s (226kB/s)(2240KiB/10160msec) 00:37:22.584 slat (usec): min=14, max=142, avg=81.23, stdev=30.83 00:37:22.584 clat (msec): min=38, max=582, avg=289.56, stdev=151.98 00:37:22.584 lat (msec): min=39, max=582, avg=289.64, stdev=152.00 00:37:22.584 clat percentiles (msec): 00:37:22.584 | 1.00th=[ 
40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:37:22.584 | 30.00th=[ 259], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.584 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 405], 95.00th=[ 422], 00:37:22.584 | 99.00th=[ 584], 99.50th=[ 584], 99.90th=[ 584], 99.95th=[ 584], 00:37:22.584 | 99.99th=[ 584] 00:37:22.584 bw ( KiB/s): min= 128, max= 896, per=4.04%, avg=229.05, stdev=178.33, samples=19 00:37:22.584 iops : min= 32, max= 224, avg=57.26, stdev=44.58, samples=19 00:37:22.584 lat (msec) : 50=20.00%, 100=5.71%, 250=3.57%, 500=67.50%, 750=3.21% 00:37:22.584 cpu : usr=97.94%, sys=1.30%, ctx=83, majf=0, minf=29 00:37:22.584 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:22.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.584 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.584 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.584 filename1: (groupid=0, jobs=1): err= 0: pid=1094215: Fri Jul 12 00:49:48 2024 00:37:22.584 read: IOPS=54, BW=216KiB/s (221kB/s)(2176KiB/10073msec) 00:37:22.584 slat (usec): min=12, max=146, avg=66.89, stdev=40.97 00:37:22.584 clat (msec): min=39, max=755, avg=295.69, stdev=156.10 00:37:22.584 lat (msec): min=39, max=755, avg=295.76, stdev=156.13 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.585 | 30.00th=[ 284], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.585 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 409], 95.00th=[ 498], 00:37:22.585 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 760], 99.95th=[ 760], 00:37:22.585 | 99.99th=[ 760] 00:37:22.585 bw ( KiB/s): min= 127, max= 912, per=3.92%, avg=222.26, stdev=182.92, samples=19 00:37:22.585 iops : min= 31, max= 228, avg=55.53, stdev=45.75, samples=19 00:37:22.585 lat (msec) : 50=23.53%, 250=3.68%, 
500=68.01%, 750=4.41%, 1000=0.37% 00:37:22.585 cpu : usr=98.37%, sys=1.22%, ctx=19, majf=0, minf=25 00:37:22.585 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename1: (groupid=0, jobs=1): err= 0: pid=1094216: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10136msec) 00:37:22.585 slat (usec): min=18, max=162, avg=101.64, stdev=19.35 00:37:22.585 clat (msec): min=32, max=622, avg=297.25, stdev=159.23 00:37:22.585 lat (msec): min=32, max=623, avg=297.35, stdev=159.23 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:37:22.585 | 30.00th=[ 279], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.585 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 414], 95.00th=[ 502], 00:37:22.585 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.585 | 99.99th=[ 625] 00:37:22.585 bw ( KiB/s): min= 128, max= 1024, per=3.92%, avg=222.32, stdev=207.86, samples=19 00:37:22.585 iops : min= 32, max= 256, avg=55.58, stdev=51.96, samples=19 00:37:22.585 lat (msec) : 50=23.53%, 100=0.37%, 250=3.31%, 500=67.65%, 750=5.15% 00:37:22.585 cpu : usr=98.56%, sys=1.05%, ctx=26, majf=0, minf=24 00:37:22.585 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename1: (groupid=0, 
jobs=1): err= 0: pid=1094217: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10137msec) 00:37:22.585 slat (usec): min=10, max=110, avg=36.54, stdev=14.79 00:37:22.585 clat (msec): min=32, max=626, avg=297.83, stdev=155.65 00:37:22.585 lat (msec): min=32, max=626, avg=297.86, stdev=155.65 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.585 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.585 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 460], 00:37:22.585 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.585 | 99.99th=[ 625] 00:37:22.585 bw ( KiB/s): min= 128, max= 1024, per=3.92%, avg=222.26, stdev=207.85, samples=19 00:37:22.585 iops : min= 32, max= 256, avg=55.53, stdev=51.96, samples=19 00:37:22.585 lat (msec) : 50=23.53%, 250=2.94%, 500=70.59%, 750=2.94% 00:37:22.585 cpu : usr=98.71%, sys=0.91%, ctx=14, majf=0, minf=25 00:37:22.585 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094218: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=58, BW=232KiB/s (238kB/s)(2360KiB/10158msec) 00:37:22.585 slat (usec): min=9, max=140, avg=81.61, stdev=31.88 00:37:22.585 clat (msec): min=25, max=568, avg=274.36, stdev=140.48 00:37:22.585 lat (msec): min=25, max=568, avg=274.45, stdev=140.49 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 57], 00:37:22.585 | 30.00th=[ 241], 40.00th=[ 275], 50.00th=[ 342], 60.00th=[ 368], 00:37:22.585 | 70.00th=[ 372], 
80.00th=[ 380], 90.00th=[ 401], 95.00th=[ 405], 00:37:22.585 | 99.00th=[ 518], 99.50th=[ 523], 99.90th=[ 567], 99.95th=[ 567], 00:37:22.585 | 99.99th=[ 567] 00:37:22.585 bw ( KiB/s): min= 128, max= 896, per=4.04%, avg=229.60, stdev=173.59, samples=20 00:37:22.585 iops : min= 32, max= 224, avg=57.40, stdev=43.40, samples=20 00:37:22.585 lat (msec) : 50=19.32%, 100=5.08%, 250=7.80%, 500=66.44%, 750=1.36% 00:37:22.585 cpu : usr=98.02%, sys=1.30%, ctx=72, majf=0, minf=21 00:37:22.585 IO depths : 1=4.1%, 2=10.3%, 4=25.1%, 8=52.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094219: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=69, BW=276KiB/s (283kB/s)(2808KiB/10158msec) 00:37:22.585 slat (nsec): min=8589, max=68881, avg=22317.41, stdev=14550.03 00:37:22.585 clat (msec): min=24, max=419, avg=230.94, stdev=104.46 00:37:22.585 lat (msec): min=24, max=419, avg=230.96, stdev=104.45 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 95], 00:37:22.585 | 30.00th=[ 236], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 259], 00:37:22.585 | 70.00th=[ 266], 80.00th=[ 284], 90.00th=[ 372], 95.00th=[ 384], 00:37:22.585 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 418], 99.95th=[ 418], 00:37:22.585 | 99.99th=[ 418] 00:37:22.585 bw ( KiB/s): min= 128, max= 912, per=4.83%, avg=274.40, stdev=159.43, samples=20 00:37:22.585 iops : min= 32, max= 228, avg=68.60, stdev=39.86, samples=20 00:37:22.585 lat (msec) : 50=16.24%, 100=4.27%, 250=20.51%, 500=58.97% 00:37:22.585 cpu : usr=98.51%, sys=1.07%, ctx=15, majf=0, minf=24 00:37:22.585 IO depths : 1=2.6%, 2=8.8%, 4=25.1%, 8=53.7%, 16=9.8%, 
32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094220: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10139msec) 00:37:22.585 slat (nsec): min=8623, max=79997, avg=19109.64, stdev=8255.03 00:37:22.585 clat (msec): min=25, max=625, avg=289.51, stdev=151.24 00:37:22.585 lat (msec): min=25, max=625, avg=289.52, stdev=151.24 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.585 | 30.00th=[ 232], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.585 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 405], 95.00th=[ 422], 00:37:22.585 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.585 | 99.99th=[ 625] 00:37:22.585 bw ( KiB/s): min= 127, max= 896, per=4.02%, avg=229.00, stdev=178.92, samples=19 00:37:22.585 iops : min= 31, max= 224, avg=57.21, stdev=44.75, samples=19 00:37:22.585 lat (msec) : 50=22.50%, 100=0.36%, 250=8.57%, 500=65.71%, 750=2.86% 00:37:22.585 cpu : usr=98.52%, sys=1.10%, ctx=14, majf=0, minf=20 00:37:22.585 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094221: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10151msec) 00:37:22.585 slat (usec): min=8, 
max=135, avg=28.36, stdev=23.13 00:37:22.585 clat (msec): min=25, max=637, avg=281.72, stdev=147.53 00:37:22.585 lat (msec): min=25, max=637, avg=281.75, stdev=147.52 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 58], 00:37:22.585 | 30.00th=[ 257], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 368], 00:37:22.585 | 70.00th=[ 376], 80.00th=[ 384], 90.00th=[ 405], 95.00th=[ 414], 00:37:22.585 | 99.00th=[ 575], 99.50th=[ 575], 99.90th=[ 634], 99.95th=[ 634], 00:37:22.585 | 99.99th=[ 634] 00:37:22.585 bw ( KiB/s): min= 128, max= 896, per=4.15%, avg=235.79, stdev=176.15, samples=19 00:37:22.585 iops : min= 32, max= 224, avg=58.95, stdev=44.04, samples=19 00:37:22.585 lat (msec) : 50=19.79%, 100=5.21%, 250=3.12%, 500=69.10%, 750=2.78% 00:37:22.585 cpu : usr=98.53%, sys=1.07%, ctx=20, majf=0, minf=27 00:37:22.585 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094222: Fri Jul 12 00:49:48 2024 00:37:22.585 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10141msec) 00:37:22.585 slat (usec): min=15, max=130, avg=39.75, stdev=23.47 00:37:22.585 clat (msec): min=39, max=628, avg=297.87, stdev=155.53 00:37:22.585 lat (msec): min=39, max=628, avg=297.91, stdev=155.53 00:37:22.585 clat percentiles (msec): 00:37:22.585 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.585 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.585 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 460], 00:37:22.585 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.585 | 99.99th=[ 
625] 00:37:22.585 bw ( KiB/s): min= 128, max= 1024, per=3.92%, avg=222.26, stdev=208.33, samples=19 00:37:22.585 iops : min= 32, max= 256, avg=55.53, stdev=52.08, samples=19 00:37:22.585 lat (msec) : 50=23.53%, 250=2.94%, 500=70.59%, 750=2.94% 00:37:22.585 cpu : usr=98.48%, sys=1.12%, ctx=25, majf=0, minf=27 00:37:22.585 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:22.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.585 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.585 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.585 filename2: (groupid=0, jobs=1): err= 0: pid=1094223: Fri Jul 12 00:49:48 2024 00:37:22.586 read: IOPS=69, BW=277KiB/s (283kB/s)(2816KiB/10181msec) 00:37:22.586 slat (usec): min=9, max=149, avg=50.37, stdev=40.37 00:37:22.586 clat (msec): min=39, max=527, avg=230.84, stdev=113.15 00:37:22.586 lat (msec): min=39, max=527, avg=230.89, stdev=113.16 00:37:22.586 clat percentiles (msec): 00:37:22.586 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 87], 00:37:22.586 | 30.00th=[ 218], 40.00th=[ 239], 50.00th=[ 253], 60.00th=[ 262], 00:37:22.586 | 70.00th=[ 292], 80.00th=[ 342], 90.00th=[ 368], 95.00th=[ 384], 00:37:22.586 | 99.00th=[ 414], 99.50th=[ 451], 99.90th=[ 527], 99.95th=[ 527], 00:37:22.586 | 99.99th=[ 527] 00:37:22.586 bw ( KiB/s): min= 128, max= 896, per=4.85%, avg=275.20, stdev=164.29, samples=20 00:37:22.586 iops : min= 32, max= 224, avg=68.80, stdev=41.07, samples=20 00:37:22.586 lat (msec) : 50=18.18%, 100=4.26%, 250=25.00%, 500=52.27%, 750=0.28% 00:37:22.586 cpu : usr=98.42%, sys=1.07%, ctx=40, majf=0, minf=31 00:37:22.586 IO depths : 1=3.1%, 2=7.4%, 4=18.9%, 8=61.2%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:22.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 complete : 0=0.0%, 4=92.3%, 8=2.1%, 16=5.6%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.586 filename2: (groupid=0, jobs=1): err= 0: pid=1094224: Fri Jul 12 00:49:48 2024 00:37:22.586 read: IOPS=55, BW=222KiB/s (227kB/s)(2240KiB/10093msec) 00:37:22.586 slat (usec): min=14, max=152, avg=98.56, stdev=23.56 00:37:22.586 clat (msec): min=25, max=466, avg=287.51, stdev=140.50 00:37:22.586 lat (msec): min=25, max=466, avg=287.61, stdev=140.51 00:37:22.586 clat percentiles (msec): 00:37:22.586 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:37:22.586 | 30.00th=[ 284], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.586 | 70.00th=[ 372], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 409], 00:37:22.586 | 99.00th=[ 414], 99.50th=[ 464], 99.90th=[ 468], 99.95th=[ 468], 00:37:22.586 | 99.99th=[ 468] 00:37:22.586 bw ( KiB/s): min= 128, max= 768, per=3.83%, avg=217.60, stdev=161.38, samples=20 00:37:22.586 iops : min= 32, max= 192, avg=54.40, stdev=40.34, samples=20 00:37:22.586 lat (msec) : 50=22.50%, 100=0.36%, 250=2.86%, 500=74.29% 00:37:22.586 cpu : usr=98.67%, sys=0.92%, ctx=18, majf=0, minf=24 00:37:22.586 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:22.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.586 filename2: (groupid=0, jobs=1): err= 0: pid=1094225: Fri Jul 12 00:49:48 2024 00:37:22.586 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10137msec) 00:37:22.586 slat (usec): min=17, max=145, avg=40.19, stdev=20.82 00:37:22.586 clat (msec): min=39, max=626, avg=297.83, stdev=158.47 00:37:22.586 lat (msec): min=39, max=626, avg=297.87, stdev=158.48 00:37:22.586 clat 
percentiles (msec): 00:37:22.586 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:37:22.586 | 30.00th=[ 284], 40.00th=[ 351], 50.00th=[ 368], 60.00th=[ 372], 00:37:22.586 | 70.00th=[ 380], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 489], 00:37:22.586 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:37:22.586 | 99.99th=[ 625] 00:37:22.586 bw ( KiB/s): min= 128, max= 1024, per=3.92%, avg=222.26, stdev=205.93, samples=19 00:37:22.586 iops : min= 32, max= 256, avg=55.53, stdev=51.48, samples=19 00:37:22.586 lat (msec) : 50=23.53%, 250=3.68%, 500=68.38%, 750=4.41% 00:37:22.586 cpu : usr=98.52%, sys=1.08%, ctx=14, majf=0, minf=27 00:37:22.586 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:37:22.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.586 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.586 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.586 00:37:22.586 Run status group 0 (all jobs): 00:37:22.586 READ: bw=5668KiB/s (5804kB/s), 215KiB/s-304KiB/s (220kB/s-311kB/s), io=56.4MiB (59.1MB), run=10069-10181msec 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 
00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:22.586 
00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 bdev_null0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 [2024-07-12 00:49:49.248063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 bdev_null1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:22.586 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.587 { 00:37:22.587 "params": { 00:37:22.587 "name": "Nvme$subsystem", 00:37:22.587 "trtype": "$TEST_TRANSPORT", 00:37:22.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.587 "adrfam": "ipv4", 00:37:22.587 "trsvcid": "$NVMF_PORT", 00:37:22.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.587 "hdgst": ${hdgst:-false}, 00:37:22.587 "ddgst": ${ddgst:-false} 00:37:22.587 }, 00:37:22.587 "method": "bdev_nvme_attach_controller" 00:37:22.587 } 00:37:22.587 EOF 00:37:22.587 )") 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 
00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:22.587 { 00:37:22.587 "params": { 00:37:22.587 "name": "Nvme$subsystem", 00:37:22.587 "trtype": "$TEST_TRANSPORT", 00:37:22.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.587 "adrfam": "ipv4", 00:37:22.587 "trsvcid": "$NVMF_PORT", 00:37:22.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.587 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:22.587 "hdgst": ${hdgst:-false}, 00:37:22.587 "ddgst": ${ddgst:-false} 00:37:22.587 }, 00:37:22.587 "method": "bdev_nvme_attach_controller" 00:37:22.587 } 00:37:22.587 EOF 00:37:22.587 )") 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:22.587 "params": { 00:37:22.587 "name": "Nvme0", 00:37:22.587 "trtype": "tcp", 00:37:22.587 "traddr": "10.0.0.2", 00:37:22.587 "adrfam": "ipv4", 00:37:22.587 "trsvcid": "4420", 00:37:22.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.587 "hdgst": false, 00:37:22.587 "ddgst": false 00:37:22.587 }, 00:37:22.587 "method": "bdev_nvme_attach_controller" 00:37:22.587 },{ 00:37:22.587 "params": { 00:37:22.587 "name": "Nvme1", 00:37:22.587 "trtype": "tcp", 00:37:22.587 "traddr": "10.0.0.2", 00:37:22.587 "adrfam": "ipv4", 00:37:22.587 "trsvcid": "4420", 00:37:22.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.587 "hdgst": false, 00:37:22.587 "ddgst": false 00:37:22.587 }, 00:37:22.587 "method": "bdev_nvme_attach_controller" 00:37:22.587 }' 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.587 00:49:49 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.587 00:49:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.587 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.587 ... 00:37:22.587 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.587 ... 
00:37:22.587 fio-3.35 00:37:22.587 Starting 4 threads 00:37:22.587 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.845 00:37:27.845 filename0: (groupid=0, jobs=1): err= 0: pid=1095225: Fri Jul 12 00:49:55 2024 00:37:27.845 read: IOPS=1643, BW=12.8MiB/s (13.5MB/s)(64.2MiB/5003msec) 00:37:27.845 slat (nsec): min=9919, max=87291, avg=20473.16, stdev=10163.84 00:37:27.845 clat (usec): min=986, max=8835, avg=4782.23, stdev=428.28 00:37:27.845 lat (usec): min=1008, max=8846, avg=4802.70, stdev=428.06 00:37:27.845 clat percentiles (usec): 00:37:27.845 | 1.00th=[ 3884], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 4686], 00:37:27.845 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4752], 00:37:27.845 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5276], 00:37:27.845 | 99.00th=[ 7046], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8717], 00:37:27.845 | 99.99th=[ 8848] 00:37:27.845 bw ( KiB/s): min=12848, max=13312, per=24.87%, avg=13142.40, stdev=162.50, samples=10 00:37:27.845 iops : min= 1606, max= 1664, avg=1642.80, stdev=20.31, samples=10 00:37:27.845 lat (usec) : 1000=0.01% 00:37:27.845 lat (msec) : 2=0.11%, 4=1.30%, 10=98.58% 00:37:27.845 cpu : usr=90.50%, sys=6.20%, ctx=247, majf=0, minf=41 00:37:27.845 IO depths : 1=1.2%, 2=22.0%, 4=51.8%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.845 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.845 issued rwts: total=8222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.845 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.845 filename0: (groupid=0, jobs=1): err= 0: pid=1095226: Fri Jul 12 00:49:55 2024 00:37:27.845 read: IOPS=1657, BW=12.9MiB/s (13.6MB/s)(64.8MiB/5005msec) 00:37:27.845 slat (nsec): min=7797, max=70539, avg=16310.24, stdev=10731.92 00:37:27.845 clat (usec): min=1332, max=8620, avg=4760.69, stdev=335.89 00:37:27.845 lat (usec): min=1355, 
max=8628, avg=4777.00, stdev=336.04 00:37:27.845 clat percentiles (usec): 00:37:27.845 | 1.00th=[ 3818], 5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 4686], 00:37:27.846 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4752], 00:37:27.846 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5145], 00:37:27.846 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 7963], 99.95th=[ 7963], 00:37:27.846 | 99.99th=[ 8586] 00:37:27.846 bw ( KiB/s): min=13056, max=13440, per=25.10%, avg=13263.40, stdev=149.04, samples=10 00:37:27.846 iops : min= 1632, max= 1680, avg=1657.90, stdev=18.66, samples=10 00:37:27.846 lat (msec) : 2=0.06%, 4=1.52%, 10=98.42% 00:37:27.846 cpu : usr=95.86%, sys=3.66%, ctx=32, majf=0, minf=62 00:37:27.846 IO depths : 1=0.9%, 2=22.5%, 4=51.5%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 issued rwts: total=8296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.846 filename1: (groupid=0, jobs=1): err= 0: pid=1095227: Fri Jul 12 00:49:55 2024 00:37:27.846 read: IOPS=1652, BW=12.9MiB/s (13.5MB/s)(64.6MiB/5002msec) 00:37:27.846 slat (nsec): min=7564, max=66075, avg=16596.76, stdev=9052.59 00:37:27.846 clat (usec): min=1106, max=8683, avg=4783.94, stdev=379.22 00:37:27.846 lat (usec): min=1117, max=8704, avg=4800.53, stdev=379.18 00:37:27.846 clat percentiles (usec): 00:37:27.846 | 1.00th=[ 3752], 5.00th=[ 4424], 10.00th=[ 4621], 20.00th=[ 4686], 00:37:27.846 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4752], 60.00th=[ 4817], 00:37:27.846 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:37:27.846 | 99.00th=[ 6325], 99.50th=[ 6980], 99.90th=[ 7963], 99.95th=[ 8029], 00:37:27.846 | 99.99th=[ 8717] 00:37:27.846 bw ( KiB/s): min=12953, max=13376, per=25.01%, avg=13218.50, 
stdev=142.51, samples=10 00:37:27.846 iops : min= 1619, max= 1672, avg=1652.30, stdev=17.84, samples=10 00:37:27.846 lat (msec) : 2=0.17%, 4=1.92%, 10=97.91% 00:37:27.846 cpu : usr=96.04%, sys=3.54%, ctx=9, majf=0, minf=35 00:37:27.846 IO depths : 1=0.4%, 2=15.2%, 4=55.2%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 issued rwts: total=8268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.846 filename1: (groupid=0, jobs=1): err= 0: pid=1095228: Fri Jul 12 00:49:55 2024 00:37:27.846 read: IOPS=1653, BW=12.9MiB/s (13.5MB/s)(64.7MiB/5005msec) 00:37:27.846 slat (nsec): min=7559, max=75322, avg=12518.00, stdev=7232.11 00:37:27.846 clat (usec): min=1523, max=7932, avg=4795.36, stdev=292.06 00:37:27.846 lat (usec): min=1531, max=7942, avg=4807.88, stdev=292.47 00:37:27.846 clat percentiles (usec): 00:37:27.846 | 1.00th=[ 4080], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 4752], 00:37:27.846 | 30.00th=[ 4752], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4817], 00:37:27.846 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4883], 95.00th=[ 5014], 00:37:27.846 | 99.00th=[ 5735], 99.50th=[ 6849], 99.90th=[ 7701], 99.95th=[ 7767], 00:37:27.846 | 99.99th=[ 7963] 00:37:27.846 bw ( KiB/s): min=12992, max=13472, per=25.03%, avg=13230.40, stdev=172.56, samples=10 00:37:27.846 iops : min= 1624, max= 1684, avg=1653.80, stdev=21.57, samples=10 00:37:27.846 lat (msec) : 2=0.04%, 4=0.43%, 10=99.53% 00:37:27.846 cpu : usr=95.72%, sys=3.88%, ctx=6, majf=0, minf=77 00:37:27.846 IO depths : 1=0.1%, 2=15.9%, 4=57.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:27.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:27.846 issued rwts: 
total=8277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:27.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:27.846 00:37:27.846 Run status group 0 (all jobs): 00:37:27.846 READ: bw=51.6MiB/s (54.1MB/s), 12.8MiB/s-12.9MiB/s (13.5MB/s-13.6MB/s), io=258MiB (271MB), run=5002-5005msec 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:37:27.846 real 0m23.731s 00:37:27.846 user 4m35.811s 00:37:27.846 sys 0m5.284s 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 ************************************ 00:37:27.846 END TEST fio_dif_rand_params 00:37:27.846 ************************************ 00:37:27.846 00:49:55 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:27.846 00:49:55 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:27.846 00:49:55 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 ************************************ 00:37:27.846 START TEST fio_dif_digest 00:37:27.846 ************************************ 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:27.846 00:49:55 
nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 bdev_null0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:27.846 [2024-07-12 00:49:55.456137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.846 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:37:27.846 { 00:37:27.846 "params": { 00:37:27.846 "name": "Nvme$subsystem", 00:37:27.846 "trtype": "$TEST_TRANSPORT", 00:37:27.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:27.846 "adrfam": "ipv4", 
00:37:27.846 "trsvcid": "$NVMF_PORT", 00:37:27.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:27.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:27.846 "hdgst": ${hdgst:-false}, 00:37:27.846 "ddgst": ${ddgst:-false} 00:37:27.846 }, 00:37:27.846 "method": "bdev_nvme_attach_controller" 00:37:27.847 } 00:37:27.847 EOF 00:37:27.847 )") 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:27.847 00:49:55 
nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:37:27.847 "params": { 00:37:27.847 "name": "Nvme0", 00:37:27.847 "trtype": "tcp", 00:37:27.847 "traddr": "10.0.0.2", 00:37:27.847 "adrfam": "ipv4", 00:37:27.847 "trsvcid": "4420", 00:37:27.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:27.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:27.847 "hdgst": true, 00:37:27.847 "ddgst": true 00:37:27.847 }, 00:37:27.847 "method": "bdev_nvme_attach_controller" 00:37:27.847 }' 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:37:27.847 00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:27.847 
00:49:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.104 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:28.104 ... 00:37:28.104 fio-3.35 00:37:28.104 Starting 3 threads 00:37:28.104 EAL: No free 2048 kB hugepages reported on node 1 00:37:40.306 00:37:40.306 filename0: (groupid=0, jobs=1): err= 0: pid=1095850: Fri Jul 12 00:50:06 2024 00:37:40.306 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10044msec) 00:37:40.306 slat (nsec): min=5670, max=32187, avg=13024.28, stdev=1535.53 00:37:40.306 clat (usec): min=13214, max=57020, avg=16471.33, stdev=1689.60 00:37:40.306 lat (usec): min=13227, max=57032, avg=16484.35, stdev=1689.58 00:37:40.306 clat percentiles (usec): 00:37:40.306 | 1.00th=[14091], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:37:40.306 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:37:40.306 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17695], 95.00th=[18220], 00:37:40.306 | 99.00th=[19006], 99.50th=[19268], 99.90th=[55837], 99.95th=[56886], 00:37:40.306 | 99.99th=[56886] 00:37:40.306 bw ( KiB/s): min=22272, max=24320, per=32.72%, avg=23336.65, stdev=511.41, samples=20 00:37:40.306 iops : min= 174, max= 190, avg=182.30, stdev= 4.01, samples=20 00:37:40.306 lat (msec) : 20=99.73%, 50=0.16%, 100=0.11% 00:37:40.306 cpu : usr=92.90%, sys=6.68%, ctx=22, majf=0, minf=152 00:37:40.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 issued rwts: total=1825,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.306 filename0: (groupid=0, jobs=1): err= 0: pid=1095851: Fri Jul 12 
00:50:06 2024 00:37:40.306 read: IOPS=176, BW=22.0MiB/s (23.1MB/s)(222MiB/10047msec) 00:37:40.306 slat (nsec): min=5551, max=25497, avg=13106.21, stdev=1349.58 00:37:40.306 clat (usec): min=12683, max=52393, avg=16970.25, stdev=1699.18 00:37:40.306 lat (usec): min=12696, max=52406, avg=16983.35, stdev=1699.25 00:37:40.306 clat percentiles (usec): 00:37:40.306 | 1.00th=[13960], 5.00th=[14877], 10.00th=[15401], 20.00th=[15795], 00:37:40.306 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:37:40.306 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:37:40.306 | 99.00th=[20055], 99.50th=[20579], 99.90th=[47449], 99.95th=[52167], 00:37:40.306 | 99.99th=[52167] 00:37:40.306 bw ( KiB/s): min=20736, max=24576, per=31.75%, avg=22643.20, stdev=820.02, samples=20 00:37:40.306 iops : min= 162, max= 192, avg=176.90, stdev= 6.41, samples=20 00:37:40.306 lat (msec) : 20=99.10%, 50=0.85%, 100=0.06% 00:37:40.306 cpu : usr=93.04%, sys=6.53%, ctx=20, majf=0, minf=107 00:37:40.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 issued rwts: total=1772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.306 filename0: (groupid=0, jobs=1): err= 0: pid=1095852: Fri Jul 12 00:50:06 2024 00:37:40.306 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(250MiB/10007msec) 00:37:40.306 slat (nsec): min=5949, max=65281, avg=13091.29, stdev=1779.75 00:37:40.306 clat (usec): min=10608, max=21434, avg=14986.39, stdev=971.97 00:37:40.306 lat (usec): min=10621, max=21463, avg=14999.48, stdev=971.92 00:37:40.306 clat percentiles (usec): 00:37:40.306 | 1.00th=[12911], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:37:40.306 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 
00:37:40.306 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:37:40.306 | 99.00th=[17433], 99.50th=[17695], 99.90th=[21365], 99.95th=[21365], 00:37:40.306 | 99.99th=[21365] 00:37:40.306 bw ( KiB/s): min=24576, max=26368, per=35.85%, avg=25571.75, stdev=410.99, samples=20 00:37:40.306 iops : min= 192, max= 206, avg=199.75, stdev= 3.18, samples=20 00:37:40.306 lat (msec) : 20=99.85%, 50=0.15% 00:37:40.306 cpu : usr=93.00%, sys=6.57%, ctx=16, majf=0, minf=173 00:37:40.306 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.306 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.306 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.306 00:37:40.306 Run status group 0 (all jobs): 00:37:40.306 READ: bw=69.6MiB/s (73.0MB/s), 22.0MiB/s-25.0MiB/s (23.1MB/s-26.2MB/s), io=700MiB (734MB), run=10007-10047msec 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:40.306 00:37:40.306 real 0m11.015s 00:37:40.306 user 0m28.861s 00:37:40.306 sys 0m2.212s 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:40.306 00:50:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.306 ************************************ 00:37:40.306 END TEST fio_dif_digest 00:37:40.306 ************************************ 00:37:40.306 00:50:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:40.306 00:50:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:40.306 00:50:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:40.306 00:50:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:40.306 00:50:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:40.306 00:50:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:40.307 rmmod nvme_tcp 00:37:40.307 rmmod nvme_fabrics 00:37:40.307 rmmod nvme_keyring 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1091243 ']' 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1091243 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 1091243 ']' 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 1091243 00:37:40.307 00:50:06 nvmf_dif -- 
common/autotest_common.sh@951 -- # uname 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1091243 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1091243' 00:37:40.307 killing process with pid 1091243 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@965 -- # kill 1091243 00:37:40.307 00:50:06 nvmf_dif -- common/autotest_common.sh@970 -- # wait 1091243 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:40.307 00:50:06 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:40.307 Waiting for block devices as requested 00:37:40.307 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:37:40.307 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:37:40.307 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:37:40.307 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:37:40.307 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:37:40.307 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:37:40.307 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:37:40.566 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:37:40.566 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:37:40.566 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:37:40.566 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:37:40.822 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:37:40.822 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:37:40.822 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:37:40.822 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:37:41.079 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:37:41.079 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 
00:37:41.079 00:50:08 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:41.079 00:50:08 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:41.079 00:50:08 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:41.079 00:50:08 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:41.079 00:50:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.079 00:50:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:41.079 00:50:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.617 00:50:10 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:37:43.617 00:37:43.617 real 1m4.788s 00:37:43.617 user 6m30.810s 00:37:43.617 sys 0m15.359s 00:37:43.617 00:50:10 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:43.617 00:50:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:43.617 ************************************ 00:37:43.617 END TEST nvmf_dif 00:37:43.617 ************************************ 00:37:43.617 00:50:10 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:43.617 00:50:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:43.617 00:50:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:43.617 00:50:10 -- common/autotest_common.sh@10 -- # set +x 00:37:43.617 ************************************ 00:37:43.617 START TEST nvmf_abort_qd_sizes 00:37:43.617 ************************************ 00:37:43.617 00:50:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:43.617 * Looking for test storage... 
00:37:43.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:43.617 00:50:11 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:37:43.617 00:50:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:37:44.997 Found 0000:08:00.0 (0x8086 - 0x159b) 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.997 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:37:44.998 Found 0000:08:00.1 (0x8086 - 0x159b) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:37:44.998 Found net devices under 0000:08:00.0: cvl_0_0 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:37:44.998 Found net devices under 0000:08:00.1: cvl_0_1 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:44.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:44.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:37:44.998 00:37:44.998 --- 10.0.0.2 ping statistics --- 00:37:44.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.998 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:44.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:44.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:37:44.998 00:37:44.998 --- 10.0.0.1 ping statistics --- 00:37:44.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:44.998 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:44.998 00:50:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:45.933 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:37:45.933 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:37:45.933 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:37:45.934 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:37:45.934 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:37:45.934 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:37:45.934 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:37:45.934 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:37:45.934 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:37:45.934 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:37:46.868 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1099547 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1099547 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 1099547 ']' 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:46.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:46.868 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.127 [2024-07-12 00:50:14.720149] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:37:47.127 [2024-07-12 00:50:14.720239] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.127 EAL: No free 2048 kB hugepages reported on node 1 00:37:47.127 [2024-07-12 00:50:14.786135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:47.127 [2024-07-12 00:50:14.878520] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.127 [2024-07-12 00:50:14.878580] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.127 [2024-07-12 00:50:14.878606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.127 [2024-07-12 00:50:14.878620] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.127 [2024-07-12 00:50:14.878632] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
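`waitforlisten` above blocks until `nvmf_tgt` creates its UNIX-domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries` times. A self-contained sketch of that polling pattern, assuming a simple bounded retry loop (`wait_for_rpc_sock` is a hypothetical name; the demo uses a plain file as a stand-in for the socket):

```shell
#!/usr/bin/env bash
# Poll for an RPC endpoint with a bounded number of retries.
wait_for_rpc_sock() {
    local path="$1" retries="${2:-100}"
    while (( retries-- > 0 )); do
        # -S matches a real UNIX socket; -e covers the stand-in file below.
        [[ -S "$path" || -e "$path" ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# Demonstration: a background job "starts listening" after a short delay.
sock="$(mktemp -u)"
( sleep 0.2; : > "$sock" ) &
wait_for_rpc_sock "$sock" && echo "ready: RPC endpoint appeared"
```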
00:37:47.127 [2024-07-12 00:50:14.878708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.127 [2024-07-12 00:50:14.878788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.127 [2024-07-12 00:50:14.878867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:47.127 [2024-07-12 00:50:14.878871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.385 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:47.385 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:37:47.385 00:50:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:47.385 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:47.385 00:50:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:47.385 00:50:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:47.385 ************************************ 00:37:47.385 START TEST spdk_target_abort 00:37:47.385 ************************************ 00:37:47.385 00:50:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:37:47.385 00:50:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:47.385 00:50:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:37:47.385 00:50:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:47.385 00:50:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.683 spdk_targetn1 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.683 [2024-07-12 00:50:17.878383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:50.683 [2024-07-12 00:50:17.910686] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:50.683 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:50.684 00:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:50.684 EAL: No free 2048 kB hugepages reported on node 1 00:37:53.999 Initializing NVMe Controllers 00:37:53.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:53.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:54.000 Initialization complete. Launching workers. 
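The xtrace above assembles the `-r` transport string field by field and then sweeps the abort example over three queue depths. The loop can be sketched standalone as below (the field values are taken from the trace; the abort command is printed rather than executed, since `build/examples/abort` only exists in an SPDK build tree):

```shell
#!/usr/bin/env bash
trtype=tcp
adrfam=IPv4
traddr=10.0.0.2
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

# Append each field as "name:value", space-separated.
target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    target+="${target:+ }$r:${!r}"    # ${!r} expands the variable named by $r
done
echo "$target"

# The harness then runs the abort tool once per queue depth:
qds=(4 24 64)
for qd in "${qds[@]}"; do
    echo build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```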
00:37:54.000 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12040, failed: 0 00:37:54.000 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1156, failed to submit 10884 00:37:54.000 success 774, unsuccess 382, failed 0 00:37:54.000 00:50:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:54.000 00:50:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:54.000 EAL: No free 2048 kB hugepages reported on node 1 00:37:57.278 Initializing NVMe Controllers 00:37:57.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:57.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:57.278 Initialization complete. Launching workers. 
00:37:57.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8713, failed: 0 00:37:57.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7476 00:37:57.278 success 293, unsuccess 944, failed 0 00:37:57.278 00:50:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:57.278 00:50:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:57.278 EAL: No free 2048 kB hugepages reported on node 1 00:38:00.560 Initializing NVMe Controllers 00:38:00.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:00.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:00.560 Initialization complete. Launching workers. 
00:38:00.560 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29070, failed: 0 00:38:00.560 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2612, failed to submit 26458 00:38:00.560 success 331, unsuccess 2281, failed 0 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:00.560 00:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1099547 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 1099547 ']' 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 1099547 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1099547 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1099547' 00:38:01.491 killing process with pid 1099547 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 1099547 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 1099547 00:38:01.491 00:38:01.491 real 0m14.139s 00:38:01.491 user 0m53.222s 00:38:01.491 sys 0m2.590s 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.491 ************************************ 00:38:01.491 END TEST spdk_target_abort 00:38:01.491 ************************************ 00:38:01.491 00:50:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:01.491 00:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:01.491 00:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:01.491 00:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:01.491 ************************************ 00:38:01.491 START TEST kernel_target_abort 00:38:01.491 ************************************ 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:38:01.491 00:50:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:01.491 00:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:02.429 Waiting for block devices as requested 00:38:02.429 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:38:02.688 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:38:02.688 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:38:02.688 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:38:02.688 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:38:02.947 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:38:02.947 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:38:02.947 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:38:02.947 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:38:03.205 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:38:03.205 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:38:03.205 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:38:03.205 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:38:03.467 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:38:03.467 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:38:03.467 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:38:03.467 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local 
device=nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:03.725 No valid GPT data, bailing 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:03.725 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:38:03.725 00:38:03.725 Discovery Log Number of Records 2, Generation counter 2 00:38:03.725 =====Discovery Log Entry 0====== 00:38:03.725 trtype: tcp 00:38:03.725 adrfam: ipv4 00:38:03.725 subtype: current discovery subsystem 00:38:03.725 treq: not specified, sq flow control disable supported 00:38:03.725 portid: 1 00:38:03.725 trsvcid: 4420 00:38:03.726 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:03.726 traddr: 10.0.0.1 00:38:03.726 eflags: none 00:38:03.726 sectype: none 00:38:03.726 =====Discovery Log Entry 1====== 00:38:03.726 trtype: tcp 00:38:03.726 adrfam: ipv4 00:38:03.726 subtype: nvme subsystem 00:38:03.726 treq: not specified, sq flow control disable supported 00:38:03.726 portid: 1 00:38:03.726 trsvcid: 4420 00:38:03.726 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:03.726 traddr: 10.0.0.1 00:38:03.726 eflags: none 00:38:03.726 sectype: none 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
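The `mkdir`/`echo`/`ln -s` sequence in the trace above is the kernel nvmet target being assembled through configfs, which the discovery log then confirms. A condensed sketch of those steps, with `NVMET_ROOT` defaulting to a scratch directory so it can be dry-run without root (on a real host it would be `/sys/kernel/config/nvmet`, where configfs creates the attribute files on `mkdir`):

```shell
#!/usr/bin/env bash
NVMET_ROOT="${NVMET_ROOT:-$(mktemp -d)}"
subnqn="nqn.2016-06.io.spdk:testnqn"
subsys="$NVMET_ROOT/subsystems/$subnqn"
port="$NVMET_ROOT/ports/1"

mkdir -p "$subsys/namespaces/1" "$port/subsystems"

# Accept any host, back namespace 1 with the NVMe block device, enable it.
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# Listen on NVMe/TCP 10.0.0.1:4420.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Exporting the subsystem through the port is a symlink.
ln -s "$subsys" "$port/subsystems/$subnqn"
```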
nqn.2016-06.io.spdk:testnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:03.726 00:50:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:03.726 EAL: No free 2048 kB hugepages reported on node 1 00:38:07.004 Initializing NVMe Controllers 00:38:07.004 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:07.004 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:07.004 Initialization complete. Launching workers. 
00:38:07.004 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45947, failed: 0 00:38:07.004 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 45947, failed to submit 0 00:38:07.004 success 0, unsuccess 45947, failed 0 00:38:07.004 00:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:07.004 00:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:07.004 EAL: No free 2048 kB hugepages reported on node 1 00:38:10.283 Initializing NVMe Controllers 00:38:10.283 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:10.283 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:10.283 Initialization complete. Launching workers. 
00:38:10.283 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80233, failed: 0 00:38:10.283 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20226, failed to submit 60007 00:38:10.283 success 0, unsuccess 20226, failed 0 00:38:10.283 00:50:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:10.283 00:50:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:10.283 EAL: No free 2048 kB hugepages reported on node 1 00:38:13.566 Initializing NVMe Controllers 00:38:13.566 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:13.566 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:13.566 Initialization complete. Launching workers. 
00:38:13.566 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78046, failed: 0 00:38:13.566 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19494, failed to submit 58552 00:38:13.566 success 0, unsuccess 19494, failed 0 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:38:13.566 00:50:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:14.135 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:38:14.135 
0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:38:14.135 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:38:14.135 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:38:15.128 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:38:15.128 00:38:15.128 real 0m13.617s 00:38:15.128 user 0m6.737s 00:38:15.128 sys 0m2.720s 00:38:15.128 00:50:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:15.128 00:50:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.128 ************************************ 00:38:15.128 END TEST kernel_target_abort 00:38:15.128 ************************************ 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:15.128 rmmod nvme_tcp 00:38:15.128 rmmod nvme_fabrics 00:38:15.128 rmmod nvme_keyring 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1099547 ']' 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1099547 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 1099547 ']' 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 1099547 00:38:15.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (1099547) - No such process 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 1099547 is not found' 00:38:15.128 Process with pid 1099547 is not found 00:38:15.128 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:38:15.129 00:50:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:16.064 Waiting for block devices as requested 00:38:16.064 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:38:16.322 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:38:16.322 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:38:16.322 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:38:16.579 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:38:16.579 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:38:16.579 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:38:16.579 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:38:16.837 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:38:16.837 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:38:16.837 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:38:16.837 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:38:17.095 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:38:17.095 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:38:17.095 
0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:38:17.095 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:38:17.355 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:17.355 00:50:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:19.260 00:50:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:19.260 00:38:19.260 real 0m36.097s 00:38:19.260 user 1m1.795s 00:38:19.260 sys 0m8.133s 00:38:19.260 00:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:19.260 00:50:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.260 ************************************ 00:38:19.260 END TEST nvmf_abort_qd_sizes 00:38:19.260 ************************************ 00:38:19.260 00:50:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:19.260 00:50:47 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:19.260 00:50:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:19.260 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:38:19.519 ************************************ 00:38:19.519 START TEST keyring_file 00:38:19.519 ************************************ 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:19.520 * Looking for test storage... 00:38:19.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:19.520 00:50:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:19.520 00:50:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:19.520 00:50:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:19.520 00:50:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.520 00:50:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.520 00:50:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.520 00:50:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:19.520 00:50:47 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:19.520 00:50:47 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.56W6AKskit 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.56W6AKskit 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.56W6AKskit 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.56W6AKskit 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mvXwawGjG2 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:19.520 00:50:47 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:19.520 00:50:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mvXwawGjG2 00:38:19.520 00:50:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mvXwawGjG2 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mvXwawGjG2 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1103974 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:19.520 00:50:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1103974 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1103974 ']' 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:19.520 00:50:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:19.520 [2024-07-12 00:50:47.336550] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:19.520 [2024-07-12 00:50:47.336675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1103974 ] 00:38:19.779 EAL: No free 2048 kB hugepages reported on node 1 00:38:19.779 [2024-07-12 00:50:47.396639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.779 [2024-07-12 00:50:47.486098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:38:20.038 00:50:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.038 [2024-07-12 00:50:47.710754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.038 null0 00:38:20.038 [2024-07-12 00:50:47.742792] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:20.038 [2024-07-12 00:50:47.743172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:20.038 [2024-07-12 00:50:47.750805] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:20.038 00:50:47 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 
nqn.2016-06.io.spdk:cnode0 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.038 [2024-07-12 00:50:47.762827] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:20.038 request: 00:38:20.038 { 00:38:20.038 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.038 "secure_channel": false, 00:38:20.038 "listen_address": { 00:38:20.038 "trtype": "tcp", 00:38:20.038 "traddr": "127.0.0.1", 00:38:20.038 "trsvcid": "4420" 00:38:20.038 }, 00:38:20.038 "method": "nvmf_subsystem_add_listener", 00:38:20.038 "req_id": 1 00:38:20.038 } 00:38:20.038 Got JSON-RPC error response 00:38:20.038 response: 00:38:20.038 { 00:38:20.038 "code": -32602, 00:38:20.038 "message": "Invalid parameters" 00:38:20.038 } 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:20.038 00:50:47 keyring_file -- keyring/file.sh@46 -- # bperfpid=1104053 00:38:20.038 00:50:47 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1104053 /var/tmp/bperf.sock 
00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1104053 ']' 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.038 00:50:47 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:20.038 00:50:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.038 [2024-07-12 00:50:47.815595] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:20.038 [2024-07-12 00:50:47.815690] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104053 ] 00:38:20.038 EAL: No free 2048 kB hugepages reported on node 1 00:38:20.038 [2024-07-12 00:50:47.875291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.297 [2024-07-12 00:50:47.962530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.297 00:50:48 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:20.297 00:50:48 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:38:20.297 00:50:48 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:20.297 00:50:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:20.555 00:50:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mvXwawGjG2 00:38:20.555 00:50:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mvXwawGjG2 00:38:21.121 00:50:48 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:38:21.121 00:50:48 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.121 00:50:48 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.56W6AKskit == \/\t\m\p\/\t\m\p\.\5\6\W\6\A\K\s\k\i\t ]] 00:38:21.121 
00:50:48 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:38:21.121 00:50:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.121 00:50:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.379 00:50:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.mvXwawGjG2 == \/\t\m\p\/\t\m\p\.\m\v\X\w\a\w\G\j\G\2 ]] 00:38:21.379 00:50:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:38:21.379 00:50:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.379 00:50:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.379 00:50:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.379 00:50:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.379 00:50:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.637 00:50:49 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:38:21.637 00:50:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:38:21.637 00:50:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.637 00:50:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.637 00:50:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.637 00:50:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.637 00:50:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.895 00:50:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:21.895 00:50:49 keyring_file -- 
keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:21.895 00:50:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.153 [2024-07-12 00:50:49.897121] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:22.153 nvme0n1 00:38:22.153 00:50:49 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:38:22.153 00:50:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.411 00:50:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.411 00:50:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.411 00:50:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.411 00:50:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.411 00:50:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:38:22.411 00:50:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:38:22.411 00:50:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.411 00:50:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.411 00:50:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.411 00:50:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.411 00:50:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.669 00:50:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:38:22.669 00:50:50 
keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:22.927 Running I/O for 1 seconds... 00:38:23.861 00:38:23.861 Latency(us) 00:38:23.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.861 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:23.861 nvme0n1 : 1.01 8742.82 34.15 0.00 0.00 14577.20 7233.23 24758.04 00:38:23.861 =================================================================================================================== 00:38:23.861 Total : 8742.82 34.15 0.00 0.00 14577.20 7233.23 24758.04 00:38:23.861 0 00:38:23.861 00:50:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:23.861 00:50:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:24.120 00:50:51 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:38:24.120 00:50:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.120 00:50:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.120 00:50:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.120 00:50:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.120 00:50:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.378 00:50:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:38:24.378 00:50:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:38:24.378 00:50:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:24.378 00:50:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.378 00:50:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:38:24.378 00:50:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.378 00:50:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.945 00:50:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:24.945 00:50:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:24.945 00:50:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.945 00:50:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.945 [2024-07-12 00:50:52.781610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 
107: Transport endpoint is not connected 00:38:24.945 [2024-07-12 00:50:52.782171] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2174190 (107): Transport endpoint is not connected 00:38:24.945 [2024-07-12 00:50:52.783162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2174190 (9): Bad file descriptor 00:38:25.204 [2024-07-12 00:50:52.784160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:25.204 [2024-07-12 00:50:52.784180] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:25.204 [2024-07-12 00:50:52.784195] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:25.204 request: 00:38:25.204 { 00:38:25.204 "name": "nvme0", 00:38:25.204 "trtype": "tcp", 00:38:25.204 "traddr": "127.0.0.1", 00:38:25.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:25.204 "adrfam": "ipv4", 00:38:25.204 "trsvcid": "4420", 00:38:25.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:25.204 "psk": "key1", 00:38:25.204 "method": "bdev_nvme_attach_controller", 00:38:25.204 "req_id": 1 00:38:25.204 } 00:38:25.204 Got JSON-RPC error response 00:38:25.204 response: 00:38:25.204 { 00:38:25.204 "code": -5, 00:38:25.204 "message": "Input/output error" 00:38:25.204 } 00:38:25.204 00:50:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:25.204 00:50:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:25.204 00:50:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:25.204 00:50:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:25.204 00:50:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:38:25.204 00:50:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.204 00:50:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.204 00:50:52 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.204 00:50:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.204 00:50:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.462 00:50:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:38:25.462 00:50:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:38:25.462 00:50:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:25.462 00:50:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.462 00:50:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.462 00:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.462 00:50:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:25.719 00:50:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:25.719 00:50:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:38:25.719 00:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:25.976 00:50:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:38:25.976 00:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:26.234 00:50:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:38:26.234 00:50:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.234 00:50:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:38:26.491 00:50:54 keyring_file -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:38:26.491 00:50:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.56W6AKskit 00:38:26.491 00:50:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:26.491 00:50:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:26.492 00:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:26.749 [2024-07-12 00:50:54.503616] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.56W6AKskit': 0100660 00:38:26.749 [2024-07-12 00:50:54.503653] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:26.749 request: 00:38:26.749 { 00:38:26.749 "name": "key0", 00:38:26.749 "path": "/tmp/tmp.56W6AKskit", 00:38:26.749 "method": "keyring_file_add_key", 00:38:26.749 "req_id": 1 00:38:26.749 } 00:38:26.749 Got JSON-RPC error response 00:38:26.749 response: 00:38:26.749 { 00:38:26.749 "code": -1, 00:38:26.749 "message": "Operation not permitted" 00:38:26.749 } 00:38:26.749 00:50:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:26.749 00:50:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 
128 )) 00:38:26.749 00:50:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:26.749 00:50:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:26.749 00:50:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.56W6AKskit 00:38:26.749 00:50:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:26.749 00:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.56W6AKskit 00:38:27.007 00:50:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.56W6AKskit 00:38:27.007 00:50:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:38:27.007 00:50:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.007 00:50:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.007 00:50:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.007 00:50:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.007 00:50:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.265 00:50:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:38:27.265 00:50:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:27.265 
00:50:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.265 00:50:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.265 00:50:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.523 [2024-07-12 00:50:55.233567] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.56W6AKskit': No such file or directory 00:38:27.523 [2024-07-12 00:50:55.233612] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:27.523 [2024-07-12 00:50:55.233647] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:27.523 [2024-07-12 00:50:55.233660] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:27.523 [2024-07-12 00:50:55.233674] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:27.523 request: 00:38:27.523 { 00:38:27.523 "name": "nvme0", 00:38:27.523 "trtype": "tcp", 00:38:27.523 "traddr": "127.0.0.1", 00:38:27.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:27.523 "adrfam": "ipv4", 00:38:27.523 "trsvcid": "4420", 00:38:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:27.523 "psk": "key0", 00:38:27.523 "method": "bdev_nvme_attach_controller", 00:38:27.523 "req_id": 1 00:38:27.523 } 00:38:27.523 Got JSON-RPC error response 00:38:27.523 response: 
00:38:27.523 { 00:38:27.523 "code": -19, 00:38:27.523 "message": "No such device" 00:38:27.523 } 00:38:27.523 00:50:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:38:27.523 00:50:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:27.523 00:50:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:27.523 00:50:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:27.523 00:50:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:38:27.523 00:50:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:27.781 00:50:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ygx6SalOhY 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:38:27.781 00:50:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:38:27.781 
00:50:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ygx6SalOhY 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ygx6SalOhY 00:38:27.781 00:50:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Ygx6SalOhY 00:38:27.781 00:50:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ygx6SalOhY 00:38:27.781 00:50:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ygx6SalOhY 00:38:28.039 00:50:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:28.039 00:50:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:28.297 nvme0n1 00:38:28.297 00:50:56 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:38:28.297 00:50:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:28.297 00:50:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.297 00:50:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.297 00:50:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.297 00:50:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.555 00:50:56 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:38:28.555 00:50:56 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:38:28.555 00:50:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:28.813 00:50:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:38:28.813 00:50:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:38:28.813 00:50:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.813 00:50:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.813 00:50:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:29.109 00:50:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:38:29.109 00:50:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:38:29.109 00:50:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:29.109 00:50:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:29.109 00:50:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:29.109 00:50:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.109 00:50:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:29.397 00:50:57 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:38:29.397 00:50:57 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:29.397 00:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:29.655 00:50:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:38:29.655 00:50:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:38:29.655 00:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:29.913 00:50:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:38:29.913 00:50:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ygx6SalOhY 00:38:29.913 00:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ygx6SalOhY 00:38:30.169 00:50:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mvXwawGjG2 00:38:30.169 00:50:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mvXwawGjG2 00:38:30.427 00:50:58 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.427 00:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.685 nvme0n1 00:38:30.685 00:50:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:38:30.685 00:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:30.944 00:50:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:38:30.944 "subsystems": [ 00:38:30.944 { 00:38:30.944 "subsystem": "keyring", 00:38:30.944 "config": [ 00:38:30.944 { 00:38:30.944 "method": "keyring_file_add_key", 00:38:30.944 "params": { 00:38:30.944 "name": "key0", 00:38:30.944 "path": "/tmp/tmp.Ygx6SalOhY" 00:38:30.944 } 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "method": "keyring_file_add_key", 00:38:30.944 "params": { 00:38:30.944 "name": "key1", 
00:38:30.944 "path": "/tmp/tmp.mvXwawGjG2" 00:38:30.944 } 00:38:30.944 } 00:38:30.944 ] 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "subsystem": "iobuf", 00:38:30.944 "config": [ 00:38:30.944 { 00:38:30.944 "method": "iobuf_set_options", 00:38:30.944 "params": { 00:38:30.944 "small_pool_count": 8192, 00:38:30.944 "large_pool_count": 1024, 00:38:30.944 "small_bufsize": 8192, 00:38:30.944 "large_bufsize": 135168 00:38:30.944 } 00:38:30.944 } 00:38:30.944 ] 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "subsystem": "sock", 00:38:30.944 "config": [ 00:38:30.944 { 00:38:30.944 "method": "sock_set_default_impl", 00:38:30.944 "params": { 00:38:30.944 "impl_name": "posix" 00:38:30.944 } 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "method": "sock_impl_set_options", 00:38:30.944 "params": { 00:38:30.944 "impl_name": "ssl", 00:38:30.944 "recv_buf_size": 4096, 00:38:30.944 "send_buf_size": 4096, 00:38:30.944 "enable_recv_pipe": true, 00:38:30.944 "enable_quickack": false, 00:38:30.944 "enable_placement_id": 0, 00:38:30.944 "enable_zerocopy_send_server": true, 00:38:30.944 "enable_zerocopy_send_client": false, 00:38:30.944 "zerocopy_threshold": 0, 00:38:30.944 "tls_version": 0, 00:38:30.944 "enable_ktls": false 00:38:30.944 } 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "method": "sock_impl_set_options", 00:38:30.944 "params": { 00:38:30.944 "impl_name": "posix", 00:38:30.944 "recv_buf_size": 2097152, 00:38:30.944 "send_buf_size": 2097152, 00:38:30.944 "enable_recv_pipe": true, 00:38:30.944 "enable_quickack": false, 00:38:30.944 "enable_placement_id": 0, 00:38:30.944 "enable_zerocopy_send_server": true, 00:38:30.944 "enable_zerocopy_send_client": false, 00:38:30.944 "zerocopy_threshold": 0, 00:38:30.944 "tls_version": 0, 00:38:30.944 "enable_ktls": false 00:38:30.944 } 00:38:30.944 } 00:38:30.944 ] 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "subsystem": "vmd", 00:38:30.944 "config": [] 00:38:30.944 }, 00:38:30.944 { 00:38:30.944 "subsystem": "accel", 00:38:30.944 "config": [ 
00:38:30.944 { 00:38:30.945 "method": "accel_set_options", 00:38:30.945 "params": { 00:38:30.945 "small_cache_size": 128, 00:38:30.945 "large_cache_size": 16, 00:38:30.945 "task_count": 2048, 00:38:30.945 "sequence_count": 2048, 00:38:30.945 "buf_count": 2048 00:38:30.945 } 00:38:30.945 } 00:38:30.945 ] 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "subsystem": "bdev", 00:38:30.945 "config": [ 00:38:30.945 { 00:38:30.945 "method": "bdev_set_options", 00:38:30.945 "params": { 00:38:30.945 "bdev_io_pool_size": 65535, 00:38:30.945 "bdev_io_cache_size": 256, 00:38:30.945 "bdev_auto_examine": true, 00:38:30.945 "iobuf_small_cache_size": 128, 00:38:30.945 "iobuf_large_cache_size": 16 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_raid_set_options", 00:38:30.945 "params": { 00:38:30.945 "process_window_size_kb": 1024 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_iscsi_set_options", 00:38:30.945 "params": { 00:38:30.945 "timeout_sec": 30 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_nvme_set_options", 00:38:30.945 "params": { 00:38:30.945 "action_on_timeout": "none", 00:38:30.945 "timeout_us": 0, 00:38:30.945 "timeout_admin_us": 0, 00:38:30.945 "keep_alive_timeout_ms": 10000, 00:38:30.945 "arbitration_burst": 0, 00:38:30.945 "low_priority_weight": 0, 00:38:30.945 "medium_priority_weight": 0, 00:38:30.945 "high_priority_weight": 0, 00:38:30.945 "nvme_adminq_poll_period_us": 10000, 00:38:30.945 "nvme_ioq_poll_period_us": 0, 00:38:30.945 "io_queue_requests": 512, 00:38:30.945 "delay_cmd_submit": true, 00:38:30.945 "transport_retry_count": 4, 00:38:30.945 "bdev_retry_count": 3, 00:38:30.945 "transport_ack_timeout": 0, 00:38:30.945 "ctrlr_loss_timeout_sec": 0, 00:38:30.945 "reconnect_delay_sec": 0, 00:38:30.945 "fast_io_fail_timeout_sec": 0, 00:38:30.945 "disable_auto_failback": false, 00:38:30.945 "generate_uuids": false, 00:38:30.945 "transport_tos": 0, 00:38:30.945 "nvme_error_stat": false, 
00:38:30.945 "rdma_srq_size": 0, 00:38:30.945 "io_path_stat": false, 00:38:30.945 "allow_accel_sequence": false, 00:38:30.945 "rdma_max_cq_size": 0, 00:38:30.945 "rdma_cm_event_timeout_ms": 0, 00:38:30.945 "dhchap_digests": [ 00:38:30.945 "sha256", 00:38:30.945 "sha384", 00:38:30.945 "sha512" 00:38:30.945 ], 00:38:30.945 "dhchap_dhgroups": [ 00:38:30.945 "null", 00:38:30.945 "ffdhe2048", 00:38:30.945 "ffdhe3072", 00:38:30.945 "ffdhe4096", 00:38:30.945 "ffdhe6144", 00:38:30.945 "ffdhe8192" 00:38:30.945 ] 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_nvme_attach_controller", 00:38:30.945 "params": { 00:38:30.945 "name": "nvme0", 00:38:30.945 "trtype": "TCP", 00:38:30.945 "adrfam": "IPv4", 00:38:30.945 "traddr": "127.0.0.1", 00:38:30.945 "trsvcid": "4420", 00:38:30.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.945 "prchk_reftag": false, 00:38:30.945 "prchk_guard": false, 00:38:30.945 "ctrlr_loss_timeout_sec": 0, 00:38:30.945 "reconnect_delay_sec": 0, 00:38:30.945 "fast_io_fail_timeout_sec": 0, 00:38:30.945 "psk": "key0", 00:38:30.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:30.945 "hdgst": false, 00:38:30.945 "ddgst": false 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_nvme_set_hotplug", 00:38:30.945 "params": { 00:38:30.945 "period_us": 100000, 00:38:30.945 "enable": false 00:38:30.945 } 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "method": "bdev_wait_for_examine" 00:38:30.945 } 00:38:30.945 ] 00:38:30.945 }, 00:38:30.945 { 00:38:30.945 "subsystem": "nbd", 00:38:30.945 "config": [] 00:38:30.945 } 00:38:30.945 ] 00:38:30.945 }' 00:38:30.945 00:50:58 keyring_file -- keyring/file.sh@114 -- # killprocess 1104053 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1104053 ']' 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1104053 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:38:30.945 00:50:58 keyring_file 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1104053 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1104053' 00:38:30.945 killing process with pid 1104053 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@965 -- # kill 1104053 00:38:30.945 Received shutdown signal, test time was about 1.000000 seconds 00:38:30.945 00:38:30.945 Latency(us) 00:38:30.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:30.945 =================================================================================================================== 00:38:30.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:30.945 00:50:58 keyring_file -- common/autotest_common.sh@970 -- # wait 1104053 00:38:31.204 00:50:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=1105200 00:38:31.204 00:50:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1105200 /var/tmp/bperf.sock 00:38:31.204 00:50:58 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 1105200 ']' 00:38:31.204 00:50:58 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:31.204 00:50:58 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:31.204 00:50:58 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:31.204 00:50:58 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:31.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:31.204 00:50:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:38:31.204 "subsystems": [ 00:38:31.204 { 00:38:31.204 "subsystem": "keyring", 00:38:31.204 "config": [ 00:38:31.204 { 00:38:31.204 "method": "keyring_file_add_key", 00:38:31.204 "params": { 00:38:31.204 "name": "key0", 00:38:31.204 "path": "/tmp/tmp.Ygx6SalOhY" 00:38:31.204 } 00:38:31.204 }, 00:38:31.204 { 00:38:31.204 "method": "keyring_file_add_key", 00:38:31.204 "params": { 00:38:31.204 "name": "key1", 00:38:31.204 "path": "/tmp/tmp.mvXwawGjG2" 00:38:31.204 } 00:38:31.204 } 00:38:31.204 ] 00:38:31.204 }, 00:38:31.204 { 00:38:31.204 "subsystem": "iobuf", 00:38:31.204 "config": [ 00:38:31.204 { 00:38:31.204 "method": "iobuf_set_options", 00:38:31.204 "params": { 00:38:31.204 "small_pool_count": 8192, 00:38:31.204 "large_pool_count": 1024, 00:38:31.204 "small_bufsize": 8192, 00:38:31.204 "large_bufsize": 135168 00:38:31.204 } 00:38:31.204 } 00:38:31.204 ] 00:38:31.204 }, 00:38:31.204 { 00:38:31.204 "subsystem": "sock", 00:38:31.204 "config": [ 00:38:31.204 { 00:38:31.204 "method": "sock_set_default_impl", 00:38:31.204 "params": { 00:38:31.204 "impl_name": "posix" 00:38:31.204 } 00:38:31.204 }, 00:38:31.204 { 00:38:31.204 "method": "sock_impl_set_options", 00:38:31.204 "params": { 00:38:31.204 "impl_name": "ssl", 00:38:31.204 "recv_buf_size": 4096, 00:38:31.204 "send_buf_size": 4096, 00:38:31.204 "enable_recv_pipe": true, 00:38:31.204 "enable_quickack": false, 00:38:31.204 "enable_placement_id": 0, 00:38:31.204 "enable_zerocopy_send_server": true, 00:38:31.204 "enable_zerocopy_send_client": false, 00:38:31.204 "zerocopy_threshold": 0, 00:38:31.204 "tls_version": 0, 00:38:31.204 "enable_ktls": false 00:38:31.204 } 00:38:31.204 }, 00:38:31.204 { 00:38:31.204 "method": "sock_impl_set_options", 00:38:31.204 "params": { 00:38:31.204 "impl_name": "posix", 00:38:31.204 "recv_buf_size": 2097152, 
00:38:31.204 "send_buf_size": 2097152, 00:38:31.204 "enable_recv_pipe": true, 00:38:31.204 "enable_quickack": false, 00:38:31.204 "enable_placement_id": 0, 00:38:31.204 "enable_zerocopy_send_server": true, 00:38:31.204 "enable_zerocopy_send_client": false, 00:38:31.204 "zerocopy_threshold": 0, 00:38:31.204 "tls_version": 0, 00:38:31.204 "enable_ktls": false 00:38:31.204 } 00:38:31.204 } 00:38:31.204 ] 00:38:31.204 }, 00:38:31.205 { 00:38:31.205 "subsystem": "vmd", 00:38:31.205 "config": [] 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "subsystem": "accel", 00:38:31.205 "config": [ 00:38:31.205 { 00:38:31.205 "method": "accel_set_options", 00:38:31.205 "params": { 00:38:31.205 "small_cache_size": 128, 00:38:31.205 "large_cache_size": 16, 00:38:31.205 "task_count": 2048, 00:38:31.205 "sequence_count": 2048, 00:38:31.205 "buf_count": 2048 00:38:31.205 } 00:38:31.205 } 00:38:31.205 ] 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "subsystem": "bdev", 00:38:31.205 "config": [ 00:38:31.205 { 00:38:31.205 "method": "bdev_set_options", 00:38:31.205 "params": { 00:38:31.205 "bdev_io_pool_size": 65535, 00:38:31.205 "bdev_io_cache_size": 256, 00:38:31.205 "bdev_auto_examine": true, 00:38:31.205 "iobuf_small_cache_size": 128, 00:38:31.205 "iobuf_large_cache_size": 16 00:38:31.205 } 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "method": "bdev_raid_set_options", 00:38:31.205 "params": { 00:38:31.205 "process_window_size_kb": 1024 00:38:31.205 } 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "method": "bdev_iscsi_set_options", 00:38:31.205 "params": { 00:38:31.205 "timeout_sec": 30 00:38:31.205 } 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "method": "bdev_nvme_set_options", 00:38:31.205 "params": { 00:38:31.205 "action_on_timeout": "none", 00:38:31.205 "timeout_us": 0, 00:38:31.205 "timeout_admin_us": 0, 00:38:31.205 "keep_alive_timeout_ms": 10000, 00:38:31.205 "arbitration_burst": 0, 00:38:31.205 "low_priority_weight": 0, 00:38:31.205 "medium_priority_weight": 0, 00:38:31.205 
"high_priority_weight": 0, 00:38:31.205 "nvme_adminq_poll_period_us": 10000, 00:38:31.205 "nvme_ioq_poll_period_us": 0, 00:38:31.205 "io_queue_requests": 512, 00:38:31.205 "delay_cmd_submit": true, 00:38:31.205 "transport_retry_count": 4, 00:38:31.205 "bdev_retry_count": 3, 00:38:31.205 "transport_ack_timeout": 0, 00:38:31.205 "ctrlr_loss_timeout_sec": 0, 00:38:31.205 "reconnect_delay_sec": 0, 00:38:31.205 "fast_io_fail_timeout_sec": 0, 00:38:31.205 "disable_auto_failback": false, 00:38:31.205 "generate_uuids": false, 00:38:31.205 "transport_tos": 0, 00:38:31.205 "nvme_error_stat": false, 00:38:31.205 "rdma_srq_size": 0, 00:38:31.205 "io_path_stat": false, 00:38:31.205 "allow_accel_sequence": false, 00:38:31.205 "rdma_max_cq_size": 0, 00:38:31.205 "rdma_cm_event_timeout_ms": 0, 00:38:31.205 "dhchap_digests": [ 00:38:31.205 "sha256", 00:38:31.205 "sha384", 00:38:31.205 "sha512" 00:38:31.205 ], 00:38:31.205 "dhchap_dhgroups": [ 00:38:31.205 "null", 00:38:31.205 "ffdhe2048", 00:38:31.205 "ffdhe3072", 00:38:31.205 "ffdhe4096", 00:38:31.205 "ffdhe6144", 00:38:31.205 "ffdhe8192" 00:38:31.205 ] 00:38:31.205 } 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "method": "bdev_nvme_attach_controller", 00:38:31.205 "params": { 00:38:31.205 "name": "nvme0", 00:38:31.205 "trtype": "TCP", 00:38:31.205 "adrfam": "IPv4", 00:38:31.205 "traddr": "127.0.0.1", 00:38:31.205 "trsvcid": "4420", 00:38:31.205 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:31.205 "prchk_reftag": false, 00:38:31.205 "prchk_guard": false, 00:38:31.205 "ctrlr_loss_timeout_sec": 0, 00:38:31.205 "reconnect_delay_sec": 0, 00:38:31.205 "fast_io_fail_timeout_sec": 0, 00:38:31.205 "psk": "key0", 00:38:31.205 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:31.205 "hdgst": false, 00:38:31.205 "ddgst": false 00:38:31.205 } 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "method": "bdev_nvme_set_hotplug", 00:38:31.205 "params": { 00:38:31.205 "period_us": 100000, 00:38:31.205 "enable": false 00:38:31.205 } 00:38:31.205 }, 
00:38:31.205 { 00:38:31.205 "method": "bdev_wait_for_examine" 00:38:31.205 } 00:38:31.205 ] 00:38:31.205 }, 00:38:31.205 { 00:38:31.205 "subsystem": "nbd", 00:38:31.205 "config": [] 00:38:31.205 } 00:38:31.205 ] 00:38:31.205 }' 00:38:31.205 00:50:58 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:31.205 00:50:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:31.205 [2024-07-12 00:50:58.961994] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:38:31.205 [2024-07-12 00:50:58.962092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105200 ] 00:38:31.205 EAL: No free 2048 kB hugepages reported on node 1 00:38:31.205 [2024-07-12 00:50:59.022358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.463 [2024-07-12 00:50:59.113237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.463 [2024-07-12 00:50:59.287446] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:31.720 00:50:59 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:31.720 00:50:59 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:38:31.720 00:50:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:38:31.720 00:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.720 00:50:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:38:31.978 00:50:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:38:31.978 00:50:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:38:31.978 00:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:31.978 00:50:59 keyring_file -- keyring/common.sh@12 -- 
# jq -r .refcnt 00:38:31.978 00:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.978 00:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:31.978 00:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.236 00:51:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:32.236 00:51:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:38:32.236 00:51:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:32.236 00:51:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.236 00:51:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.236 00:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.236 00:51:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:32.495 00:51:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:38:32.495 00:51:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:38:32.495 00:51:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:32.495 00:51:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:38:33.063 00:51:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:38:33.063 00:51:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:33.063 00:51:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ygx6SalOhY /tmp/tmp.mvXwawGjG2 00:38:33.063 00:51:00 keyring_file -- keyring/file.sh@20 -- # killprocess 1105200 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1105200 ']' 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1105200 
00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@951 -- # uname 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1105200 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1105200' 00:38:33.063 killing process with pid 1105200 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@965 -- # kill 1105200 00:38:33.063 Received shutdown signal, test time was about 1.000000 seconds 00:38:33.063 00:38:33.063 Latency(us) 00:38:33.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:33.063 =================================================================================================================== 00:38:33.063 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@970 -- # wait 1105200 00:38:33.063 00:51:00 keyring_file -- keyring/file.sh@21 -- # killprocess 1103974 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 1103974 ']' 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@950 -- # kill -0 1103974 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@951 -- # uname 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1103974 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:33.063 00:51:00 keyring_file -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 1103974' 00:38:33.063 killing process with pid 1103974 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@965 -- # kill 1103974 00:38:33.063 [2024-07-12 00:51:00.817328] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:38:33.063 00:51:00 keyring_file -- common/autotest_common.sh@970 -- # wait 1103974 00:38:33.322 00:38:33.322 real 0m13.975s 00:38:33.322 user 0m36.042s 00:38:33.322 sys 0m3.083s 00:38:33.322 00:51:01 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:33.322 00:51:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:33.322 ************************************ 00:38:33.322 END TEST keyring_file 00:38:33.322 ************************************ 00:38:33.323 00:51:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:38:33.323 00:51:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:33.323 00:51:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:33.323 00:51:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:33.323 00:51:01 -- common/autotest_common.sh@10 -- # set +x 00:38:33.323 ************************************ 00:38:33.323 START TEST keyring_linux 00:38:33.323 ************************************ 00:38:33.323 00:51:01 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:33.582 * Looking for test storage... 
00:38:33.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:33.582 00:51:01 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:33.582 00:51:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:33.582 00:51:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:33.582 00:51:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:33.582 00:51:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.582 00:51:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.582 00:51:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.582 00:51:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:33.582 00:51:01 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:33.582 00:51:01 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:33.582 /tmp/:spdk-test:key0 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:38:33.582 00:51:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:33.582 00:51:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:33.582 /tmp/:spdk-test:key1 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1105548 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:33.582 00:51:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1105548 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1105548 ']' 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:33.582 00:51:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:33.582 [2024-07-12 00:51:01.364354] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:33.582 [2024-07-12 00:51:01.364460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105548 ] 00:38:33.582 EAL: No free 2048 kB hugepages reported on node 1 00:38:33.840 [2024-07-12 00:51:01.425234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.840 [2024-07-12 00:51:01.516150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:34.099 [2024-07-12 00:51:01.736989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.099 null0 00:38:34.099 [2024-07-12 00:51:01.769053] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:34.099 [2024-07-12 00:51:01.769416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:34.099 877995890 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:34.099 1041828562 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1105673 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:34.099 00:51:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1105673 /var/tmp/bperf.sock 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 1105673 ']' 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:34.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:34.099 00:51:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:34.099 [2024-07-12 00:51:01.840878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:38:34.099 [2024-07-12 00:51:01.840965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105673 ] 00:38:34.099 EAL: No free 2048 kB hugepages reported on node 1 00:38:34.099 [2024-07-12 00:51:01.899592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.358 [2024-07-12 00:51:01.986899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.358 00:51:02 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:34.358 00:51:02 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:38:34.358 00:51:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:34.358 00:51:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:34.616 00:51:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:34.616 00:51:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:34.873 00:51:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:34.873 00:51:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:35.131 [2024-07-12 00:51:02.919886] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:35.387 nvme0n1 00:38:35.387 
00:51:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:35.387 00:51:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:35.387 00:51:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:35.387 00:51:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:35.387 00:51:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:35.387 00:51:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.642 00:51:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:35.642 00:51:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:35.642 00:51:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:35.642 00:51:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:35.642 00:51:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.642 00:51:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:35.642 00:51:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@25 -- # sn=877995890 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 877995890 == \8\7\7\9\9\5\8\9\0 ]] 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 877995890 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:35.899 00:51:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:35.899 Running I/O for 1 seconds... 00:38:36.833 00:38:36.833 Latency(us) 00:38:36.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.833 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:36.833 nvme0n1 : 1.01 9303.99 36.34 0.00 0.00 13645.72 3932.16 17379.18 00:38:36.833 =================================================================================================================== 00:38:36.833 Total : 9303.99 36.34 0.00 0.00 13645.72 3932.16 17379.18 00:38:36.833 0 00:38:36.833 00:51:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:36.833 00:51:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:37.400 00:51:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:37.400 00:51:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:37.400 00:51:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:37.400 00:51:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:37.400 00:51:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:37.400 00:51:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:37.658 00:51:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:37.658 00:51:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:37.658 00:51:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:37.658 00:51:05 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.658 00:51:05 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:37.658 00:51:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:37.916 [2024-07-12 00:51:05.534904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:37.916 [2024-07-12 00:51:05.535042] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb360f0 (107): Transport endpoint is not connected 00:38:37.916 [2024-07-12 00:51:05.536034] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xb360f0 (9): Bad file descriptor 00:38:37.916 [2024-07-12 00:51:05.537034] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:38:37.917 [2024-07-12 00:51:05.537055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:37.917 [2024-07-12 00:51:05.537071] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:38:37.917 request: 00:38:37.917 { 00:38:37.917 "name": "nvme0", 00:38:37.917 "trtype": "tcp", 00:38:37.917 "traddr": "127.0.0.1", 00:38:37.917 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:37.917 "adrfam": "ipv4", 00:38:37.917 "trsvcid": "4420", 00:38:37.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:37.917 "psk": ":spdk-test:key1", 00:38:37.917 "method": "bdev_nvme_attach_controller", 00:38:37.917 "req_id": 1 00:38:37.917 } 00:38:37.917 Got JSON-RPC error response 00:38:37.917 response: 00:38:37.917 { 00:38:37.917 "code": -5, 00:38:37.917 "message": "Input/output error" 00:38:37.917 } 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@33 -- # sn=877995890 00:38:37.917 00:51:05 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 877995890 00:38:37.917 1 links removed 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@33 -- # sn=1041828562 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1041828562 00:38:37.917 1 links removed 00:38:37.917 00:51:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1105673 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1105673 ']' 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1105673 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1105673 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1105673' 00:38:37.917 killing process with pid 1105673 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@965 -- # kill 1105673 00:38:37.917 Received shutdown signal, test time was about 1.000000 seconds 00:38:37.917 00:38:37.917 Latency(us) 00:38:37.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:37.917 
=================================================================================================================== 00:38:37.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:37.917 00:51:05 keyring_linux -- common/autotest_common.sh@970 -- # wait 1105673 00:38:38.175 00:51:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1105548 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 1105548 ']' 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 1105548 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 1105548 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 1105548' 00:38:38.175 killing process with pid 1105548 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@965 -- # kill 1105548 00:38:38.175 00:51:05 keyring_linux -- common/autotest_common.sh@970 -- # wait 1105548 00:38:38.433 00:38:38.433 real 0m4.914s 00:38:38.433 user 0m10.058s 00:38:38.433 sys 0m1.516s 00:38:38.433 00:51:06 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:38.433 00:51:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:38.433 ************************************ 00:38:38.433 END TEST keyring_linux 00:38:38.433 ************************************ 00:38:38.433 00:51:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@321 -- # '[' 0 
-eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:38:38.433 00:51:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:38:38.433 00:51:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:38:38.433 00:51:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:38:38.433 00:51:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:38:38.433 00:51:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:38:38.433 00:51:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:38:38.433 00:51:06 -- common/autotest_common.sh@720 -- # xtrace_disable 00:38:38.433 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:38:38.433 00:51:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:38:38.433 00:51:06 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:38:38.433 00:51:06 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:38:38.433 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:38:39.808 INFO: APP EXITING 00:38:39.808 INFO: killing all VMs 00:38:39.808 INFO: killing vhost app 00:38:39.808 WARN: no vhost pid file found 00:38:39.808 INFO: EXIT DONE 00:38:40.741 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:38:40.999 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:38:40.999 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:38:40.999 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:38:40.999 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:38:40.999 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:38:40.999 0000:00:04.2 (8086 3c22): Already using the ioatdma 
driver 00:38:40.999 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:38:40.999 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:38:40.999 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:38:40.999 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:38:40.999 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:38:40.999 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:38:40.999 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:38:40.999 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:38:40.999 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:38:40.999 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:38:41.936 Cleaning 00:38:41.936 Removing: /var/run/dpdk/spdk0/config 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:41.936 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:41.936 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:41.936 Removing: /var/run/dpdk/spdk1/config 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:41.936 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:42.195 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:42.195 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:42.195 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:42.195 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:42.195 Removing: /var/run/dpdk/spdk1/mp_socket 00:38:42.195 Removing: /var/run/dpdk/spdk2/config 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:42.195 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:42.195 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:42.195 Removing: /var/run/dpdk/spdk3/config 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:42.195 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:42.195 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:42.195 Removing: /var/run/dpdk/spdk4/config 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:42.195 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:42.195 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:42.195 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:42.195 Removing: /dev/shm/bdev_svc_trace.1 00:38:42.195 Removing: /dev/shm/nvmf_trace.0 00:38:42.195 Removing: /dev/shm/spdk_tgt_trace.pid851521 00:38:42.195 Removing: /var/run/dpdk/spdk0 00:38:42.195 Removing: /var/run/dpdk/spdk1 00:38:42.195 Removing: /var/run/dpdk/spdk2 00:38:42.195 Removing: /var/run/dpdk/spdk3 00:38:42.195 Removing: /var/run/dpdk/spdk4 00:38:42.195 Removing: /var/run/dpdk/spdk_pid1001607 00:38:42.195 Removing: /var/run/dpdk/spdk_pid1019988 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1022109 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1024900 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1025652 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1026524 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1028522 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1030346 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1034057 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1034059 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1036287 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1036389 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1036497 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1036699 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1036704 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1037611 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1038502 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1039426 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1040371 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1041264 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1042246 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1045170 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1045427 00:38:42.196 Removing: 
/var/run/dpdk/spdk_pid1046437 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1047049 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1049883 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1051370 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1054028 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1057235 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1062213 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1065588 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1065608 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1075860 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1076209 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1076528 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1076905 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1077376 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1077688 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1077998 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1078340 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1080240 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1080345 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1083876 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1084011 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1085267 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1089126 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1089137 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1091299 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1092445 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1093506 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1094071 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1095141 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1095768 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1099824 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1100090 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1100398 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1101602 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1101903 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1102117 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1103974 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1104053 
00:38:42.196 Removing: /var/run/dpdk/spdk_pid1105200 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1105548 00:38:42.196 Removing: /var/run/dpdk/spdk_pid1105673 00:38:42.196 Removing: /var/run/dpdk/spdk_pid850265 00:38:42.196 Removing: /var/run/dpdk/spdk_pid850869 00:38:42.196 Removing: /var/run/dpdk/spdk_pid851521 00:38:42.196 Removing: /var/run/dpdk/spdk_pid851896 00:38:42.196 Removing: /var/run/dpdk/spdk_pid852407 00:38:42.196 Removing: /var/run/dpdk/spdk_pid852442 00:38:42.196 Removing: /var/run/dpdk/spdk_pid852994 00:38:42.196 Removing: /var/run/dpdk/spdk_pid853084 00:38:42.196 Removing: /var/run/dpdk/spdk_pid853211 00:38:42.196 Removing: /var/run/dpdk/spdk_pid854231 00:38:42.196 Removing: /var/run/dpdk/spdk_pid854941 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855108 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855262 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855434 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855586 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855709 00:38:42.455 Removing: /var/run/dpdk/spdk_pid855833 00:38:42.455 Removing: /var/run/dpdk/spdk_pid856058 00:38:42.455 Removing: /var/run/dpdk/spdk_pid856429 00:38:42.455 Removing: /var/run/dpdk/spdk_pid858364 00:38:42.455 Removing: /var/run/dpdk/spdk_pid858577 00:38:42.455 Removing: /var/run/dpdk/spdk_pid858707 00:38:42.455 Removing: /var/run/dpdk/spdk_pid858726 00:38:42.455 Removing: /var/run/dpdk/spdk_pid858983 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859057 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859307 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859393 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859534 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859592 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859755 00:38:42.455 Removing: /var/run/dpdk/spdk_pid859775 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860074 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860195 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860445 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860492 00:38:42.455 
Removing: /var/run/dpdk/spdk_pid860604 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860671 00:38:42.455 Removing: /var/run/dpdk/spdk_pid860798 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861002 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861127 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861257 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861379 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861587 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861712 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861832 00:38:42.455 Removing: /var/run/dpdk/spdk_pid861961 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862166 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862291 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862417 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862538 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862748 00:38:42.455 Removing: /var/run/dpdk/spdk_pid862908 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863093 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863276 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863447 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863570 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863699 00:38:42.455 Removing: /var/run/dpdk/spdk_pid863909 00:38:42.455 Removing: /var/run/dpdk/spdk_pid864400 00:38:42.455 Removing: /var/run/dpdk/spdk_pid866062 00:38:42.455 Removing: /var/run/dpdk/spdk_pid907696 00:38:42.455 Removing: /var/run/dpdk/spdk_pid909630 00:38:42.455 Removing: /var/run/dpdk/spdk_pid915725 00:38:42.455 Removing: /var/run/dpdk/spdk_pid918172 00:38:42.455 Removing: /var/run/dpdk/spdk_pid919976 00:38:42.455 Removing: /var/run/dpdk/spdk_pid920291 00:38:42.455 Removing: /var/run/dpdk/spdk_pid925917 00:38:42.455 Removing: /var/run/dpdk/spdk_pid925930 00:38:42.455 Removing: /var/run/dpdk/spdk_pid926418 00:38:42.455 Removing: /var/run/dpdk/spdk_pid926913 00:38:42.455 Removing: /var/run/dpdk/spdk_pid927377 00:38:42.455 Removing: /var/run/dpdk/spdk_pid927711 00:38:42.455 Removing: /var/run/dpdk/spdk_pid927719 00:38:42.455 Removing: 
/var/run/dpdk/spdk_pid927828 00:38:42.455 Removing: /var/run/dpdk/spdk_pid927935 00:38:42.455 Removing: /var/run/dpdk/spdk_pid928026 00:38:42.455 Removing: /var/run/dpdk/spdk_pid928437 00:38:42.455 Removing: /var/run/dpdk/spdk_pid928934 00:38:42.455 Removing: /var/run/dpdk/spdk_pid929436 00:38:42.455 Removing: /var/run/dpdk/spdk_pid929733 00:38:42.455 Removing: /var/run/dpdk/spdk_pid929744 00:38:42.455 Removing: /var/run/dpdk/spdk_pid929936 00:38:42.455 Removing: /var/run/dpdk/spdk_pid930635 00:38:42.455 Removing: /var/run/dpdk/spdk_pid931279 00:38:42.455 Removing: /var/run/dpdk/spdk_pid935460 00:38:42.455 Removing: /var/run/dpdk/spdk_pid935686 00:38:42.455 Removing: /var/run/dpdk/spdk_pid938138 00:38:42.455 Removing: /var/run/dpdk/spdk_pid941070 00:38:42.455 Removing: /var/run/dpdk/spdk_pid942735 00:38:42.455 Removing: /var/run/dpdk/spdk_pid947661 00:38:42.455 Removing: /var/run/dpdk/spdk_pid951571 00:38:42.455 Removing: /var/run/dpdk/spdk_pid952562 00:38:42.455 Removing: /var/run/dpdk/spdk_pid953064 00:38:42.455 Removing: /var/run/dpdk/spdk_pid960887 00:38:42.455 Removing: /var/run/dpdk/spdk_pid962507 00:38:42.455 Removing: /var/run/dpdk/spdk_pid986164 00:38:42.455 Removing: /var/run/dpdk/spdk_pid988323 00:38:42.455 Removing: /var/run/dpdk/spdk_pid989219 00:38:42.455 Removing: /var/run/dpdk/spdk_pid990210 00:38:42.455 Removing: /var/run/dpdk/spdk_pid990310 00:38:42.455 Removing: /var/run/dpdk/spdk_pid990412 00:38:42.455 Removing: /var/run/dpdk/spdk_pid990467 00:38:42.455 Removing: /var/run/dpdk/spdk_pid990773 00:38:42.455 Removing: /var/run/dpdk/spdk_pid991766 00:38:42.455 Removing: /var/run/dpdk/spdk_pid992339 00:38:42.455 Removing: /var/run/dpdk/spdk_pid992672 00:38:42.455 Removing: /var/run/dpdk/spdk_pid993829 00:38:42.455 Removing: /var/run/dpdk/spdk_pid994153 00:38:42.455 Removing: /var/run/dpdk/spdk_pid994586 00:38:42.455 Removing: /var/run/dpdk/spdk_pid996349 00:38:42.455 Removing: /var/run/dpdk/spdk_pid998840 00:38:42.455 Clean 00:38:42.713 00:51:10 -- 
common/autotest_common.sh@1447 -- # return 0 00:38:42.713 00:51:10 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:38:42.713 00:51:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.713 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:38:42.713 00:51:10 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:38:42.713 00:51:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.713 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:38:42.713 00:51:10 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:42.713 00:51:10 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:42.713 00:51:10 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:42.713 00:51:10 -- spdk/autotest.sh@391 -- # hash lcov 00:38:42.713 00:51:10 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:38:42.713 00:51:10 -- spdk/autotest.sh@393 -- # hostname 00:38:42.713 00:51:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:42.972 geninfo: WARNING: invalid characters removed from testname! 
00:39:15.116 00:51:38 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:15.116 00:51:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:17.644 00:51:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:20.925 00:51:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:23.454 00:51:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.732 00:51:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:29.255 00:51:57 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:29.516 00:51:57 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:29.516 00:51:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:39:29.516 00:51:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:29.516 00:51:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:29.516 00:51:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.516 00:51:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.516 00:51:57 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.516 00:51:57 -- paths/export.sh@5 -- $ export PATH 00:39:29.516 00:51:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:29.516 00:51:57 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:39:29.516 00:51:57 -- common/autobuild_common.sh@437 -- $ date +%s 00:39:29.516 00:51:57 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720738317.XXXXXX 00:39:29.516 00:51:57 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720738317.CBxICm 00:39:29.516 00:51:57 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:39:29.516 00:51:57 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:39:29.516 00:51:57 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:39:29.516 00:51:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:39:29.516 00:51:57 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:39:29.516 00:51:57 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:39:29.516 00:51:57 -- common/autobuild_common.sh@453 -- $ get_config_params 00:39:29.516 00:51:57 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:39:29.516 00:51:57 -- common/autotest_common.sh@10 -- $ set +x 00:39:29.516 00:51:57 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:39:29.516 00:51:57 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:39:29.516 00:51:57 -- pm/common@17 -- $ local monitor 00:39:29.516 00:51:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:29.516 00:51:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:29.516 00:51:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:29.516 00:51:57 -- pm/common@21 -- $ date +%s 00:39:29.516 00:51:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:29.516 00:51:57 -- pm/common@21 -- $ date +%s 00:39:29.516 00:51:57 -- pm/common@25 -- $ sleep 1 00:39:29.516 00:51:57 -- pm/common@21 -- $ date +%s 00:39:29.516 00:51:57 -- pm/common@21 -- $ date +%s 00:39:29.516 00:51:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720738317 00:39:29.516 00:51:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720738317 00:39:29.516 
00:51:57 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720738317 00:39:29.516 00:51:57 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720738317 00:39:29.516 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720738317_collect-vmstat.pm.log 00:39:29.516 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720738317_collect-cpu-load.pm.log 00:39:29.517 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720738317_collect-cpu-temp.pm.log 00:39:29.517 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720738317_collect-bmc-pm.bmc.pm.log 00:39:30.454 00:51:58 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:39:30.454 00:51:58 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:39:30.454 00:51:58 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.454 00:51:58 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:39:30.454 00:51:58 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:39:30.454 00:51:58 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:39:30.454 00:51:58 -- spdk/autopackage.sh@19 -- $ timing_finish 00:39:30.454 00:51:58 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:30.454 00:51:58 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:39:30.454 00:51:58 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:30.454 00:51:58 -- spdk/autopackage.sh@20 -- $ exit 0 00:39:30.454 00:51:58 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:39:30.454 00:51:58 -- pm/common@29 -- $ signal_monitor_resources TERM 00:39:30.454 00:51:58 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:39:30.454 00:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:30.454 00:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:39:30.454 00:51:58 -- pm/common@44 -- $ pid=1116473 00:39:30.454 00:51:58 -- pm/common@50 -- $ kill -TERM 1116473 00:39:30.454 00:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:30.454 00:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:39:30.454 00:51:58 -- pm/common@44 -- $ pid=1116475 00:39:30.454 00:51:58 -- pm/common@50 -- $ kill -TERM 1116475 00:39:30.454 00:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:30.454 00:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:39:30.454 00:51:58 -- pm/common@44 -- $ pid=1116477 00:39:30.454 00:51:58 -- pm/common@50 -- $ kill -TERM 1116477 00:39:30.454 00:51:58 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:39:30.454 00:51:58 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:39:30.454 00:51:58 -- pm/common@44 -- $ pid=1116508 00:39:30.454 00:51:58 -- pm/common@50 -- $ sudo -E kill -TERM 1116508 00:39:30.454 + [[ -n 754920 ]] 00:39:30.454 + sudo kill 754920 00:39:30.465 [Pipeline] } 00:39:30.483 [Pipeline] // stage 00:39:30.488 [Pipeline] } 00:39:30.502 [Pipeline] // timeout 00:39:30.507 [Pipeline] } 00:39:30.525 [Pipeline] // catchError 00:39:30.530 [Pipeline] } 
00:39:30.550 [Pipeline] // wrap 00:39:30.554 [Pipeline] } 00:39:30.570 [Pipeline] // catchError 00:39:30.580 [Pipeline] stage 00:39:30.582 [Pipeline] { (Epilogue) 00:39:30.596 [Pipeline] catchError 00:39:30.598 [Pipeline] { 00:39:30.614 [Pipeline] echo 00:39:30.615 Cleanup processes 00:39:30.622 [Pipeline] sh 00:39:30.925 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.926 1116632 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:39:30.926 1116689 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:30.956 [Pipeline] sh 00:39:31.245 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:31.245 ++ grep -v 'sudo pgrep' 00:39:31.245 ++ awk '{print $1}' 00:39:31.245 + sudo kill -9 1116632 00:39:31.261 [Pipeline] sh 00:39:31.552 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:39.679 [Pipeline] sh 00:39:39.967 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:39.967 Artifacts sizes are good 00:39:39.983 [Pipeline] archiveArtifacts 00:39:39.991 Archiving artifacts 00:39:40.249 [Pipeline] sh 00:39:40.536 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:40.553 [Pipeline] cleanWs 00:39:40.564 [WS-CLEANUP] Deleting project workspace... 00:39:40.564 [WS-CLEANUP] Deferred wipeout is used... 00:39:40.572 [WS-CLEANUP] done 00:39:40.573 [Pipeline] } 00:39:40.591 [Pipeline] // catchError 00:39:40.602 [Pipeline] sh 00:39:40.881 + logger -p user.info -t JENKINS-CI 00:39:40.891 [Pipeline] } 00:39:40.914 [Pipeline] // stage 00:39:40.920 [Pipeline] } 00:39:40.939 [Pipeline] // node 00:39:40.945 [Pipeline] End of Pipeline 00:39:40.977 Finished: SUCCESS